Result: FAILURE
Tests: 2 failed / 1691 succeeded
Started: 2019-06-11 08:42
Elapsed: 30m33s
Revision:
Builder: gke-prow-containerd-pool-bigger-170c3937-6zzk
pod: b8b9337a-8c24-11e9-9cdf-225cd3053b6a
resultstore: https://source.cloud.google.com/results/invocations/de74eed9-22b9-449b-81a5-64e4fb085e59/targets/test
infra-commit: 1a71afd20
repo: k8s.io/kubernetes
repo-commit: 8de1569ddae62e8fab559fe6bd210a5d6100a277
repos: k8s.io/kubernetes (branch: master)

Test Failures


k8s.io/kubernetes/test/integration/apiserver/admissionwebhook TestWebhookV1beta1/storage.k8s.io.v1.storageclasses/deletecollection 0.00s

go test -v k8s.io/kubernetes/test/integration/apiserver/admissionwebhook -run TestWebhookV1beta1/storage.k8s.io.v1.storageclasses/deletecollection$
from junit_d431ed5f68ae4ddf888439fb96b687a923412204_20190611-085714.xml
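
To reproduce locally, these integration tests expect a k8s.io/kubernetes checkout and an etcd reachable at http://127.0.0.1:2379 (the endpoint the storagebackend.Config lines in the log below point at). A minimal sketch, assuming the repo's hack/install-etcd.sh helper is available:

./hack/install-etcd.sh                       # downloads etcd into third_party/etcd
export PATH="$(pwd)/third_party/etcd:$PATH"
etcd &                                       # default client URL is http://127.0.0.1:2379
go test -v k8s.io/kubernetes/test/integration/apiserver/admissionwebhook -run 'TestWebhookV1beta1/storage.k8s.io.v1.storageclasses/deletecollection$'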

k8s.io/kubernetes/test/integration/scheduler TestNodePIDPressure 34s

go test -v k8s.io/kubernetes/test/integration/scheduler -run TestNodePIDPressure$
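
Most of the output below is bootstrap logging from the test fixture's in-process apiserver (one storage_factory.go / store.go pair per resource type) before the scheduler test itself runs. When re-running locally it can help to keep the full log but surface only the test verdict lines; a sketch, with the same etcd assumption as above and scheduler.log as an illustrative file name:

go test -v k8s.io/kubernetes/test/integration/scheduler -run 'TestNodePIDPressure$' 2>&1 | tee scheduler.log | grep -E '=== RUN|--- (PASS|FAIL)'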
I0611 09:03:24.633695  109206 services.go:33] Network range for service cluster IPs is unspecified. Defaulting to {10.0.0.0 ffffff00}.
I0611 09:03:24.633730  109206 services.go:45] Setting service IP to "10.0.0.1" (read-write).
I0611 09:03:24.633742  109206 master.go:277] Node port range unspecified. Defaulting to 30000-32767.
I0611 09:03:24.633786  109206 master.go:233] Using reconciler: 
I0611 09:03:24.635701  109206 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.635940  109206 client.go:354] parsed scheme: ""
I0611 09:03:24.636024  109206 client.go:354] scheme "" not registered, fallback to default scheme
I0611 09:03:24.636093  109206 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0611 09:03:24.636234  109206 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 09:03:24.637126  109206 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0611 09:03:24.637179  109206 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 09:03:24.637379  109206 store.go:1343] Monitoring podtemplates count at <storage-prefix>//podtemplates
I0611 09:03:24.637462  109206 reflector.go:160] Listing and watching *core.PodTemplate from storage/cacher.go:/podtemplates
I0611 09:03:24.638390  109206 watch_cache.go:405] Replace watchCache (rev: 24315) 
I0611 09:03:24.637415  109206 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.639248  109206 client.go:354] parsed scheme: ""
I0611 09:03:24.639281  109206 client.go:354] scheme "" not registered, fallback to default scheme
I0611 09:03:24.639429  109206 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0611 09:03:24.639558  109206 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 09:03:24.640013  109206 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0611 09:03:24.640409  109206 store.go:1343] Monitoring events count at <storage-prefix>//events
I0611 09:03:24.640573  109206 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.640680  109206 client.go:354] parsed scheme: ""
I0611 09:03:24.640724  109206 client.go:354] scheme "" not registered, fallback to default scheme
I0611 09:03:24.640783  109206 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0611 09:03:24.640498  109206 reflector.go:160] Listing and watching *core.Event from storage/cacher.go:/events
I0611 09:03:24.641147  109206 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 09:03:24.641522  109206 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0611 09:03:24.641636  109206 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 09:03:24.641787  109206 store.go:1343] Monitoring limitranges count at <storage-prefix>//limitranges
I0611 09:03:24.641848  109206 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.641944  109206 client.go:354] parsed scheme: ""
I0611 09:03:24.641980  109206 client.go:354] scheme "" not registered, fallback to default scheme
I0611 09:03:24.642023  109206 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0611 09:03:24.641980  109206 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 09:03:24.642143  109206 reflector.go:160] Listing and watching *core.LimitRange from storage/cacher.go:/limitranges
I0611 09:03:24.642221  109206 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 09:03:24.642522  109206 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0611 09:03:24.642669  109206 store.go:1343] Monitoring resourcequotas count at <storage-prefix>//resourcequotas
I0611 09:03:24.642857  109206 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.642930  109206 client.go:354] parsed scheme: ""
I0611 09:03:24.642940  109206 client.go:354] scheme "" not registered, fallback to default scheme
I0611 09:03:24.642972  109206 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0611 09:03:24.643013  109206 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 09:03:24.643046  109206 reflector.go:160] Listing and watching *core.ResourceQuota from storage/cacher.go:/resourcequotas
I0611 09:03:24.643269  109206 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 09:03:24.643499  109206 watch_cache.go:405] Replace watchCache (rev: 24315) 
I0611 09:03:24.643500  109206 watch_cache.go:405] Replace watchCache (rev: 24315) 
I0611 09:03:24.643550  109206 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0611 09:03:24.643636  109206 store.go:1343] Monitoring secrets count at <storage-prefix>//secrets
I0611 09:03:24.643729  109206 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 09:03:24.643781  109206 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.643838  109206 client.go:354] parsed scheme: ""
I0611 09:03:24.643847  109206 client.go:354] scheme "" not registered, fallback to default scheme
I0611 09:03:24.643874  109206 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0611 09:03:24.643915  109206 reflector.go:160] Listing and watching *core.Secret from storage/cacher.go:/secrets
I0611 09:03:24.644208  109206 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 09:03:24.644450  109206 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0611 09:03:24.644557  109206 store.go:1343] Monitoring persistentvolumes count at <storage-prefix>//persistentvolumes
I0611 09:03:24.644700  109206 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.644767  109206 client.go:354] parsed scheme: ""
I0611 09:03:24.644778  109206 client.go:354] scheme "" not registered, fallback to default scheme
I0611 09:03:24.644809  109206 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0611 09:03:24.644851  109206 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 09:03:24.644865  109206 watch_cache.go:405] Replace watchCache (rev: 24315) 
I0611 09:03:24.644882  109206 reflector.go:160] Listing and watching *core.PersistentVolume from storage/cacher.go:/persistentvolumes
I0611 09:03:24.645099  109206 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 09:03:24.645108  109206 watch_cache.go:405] Replace watchCache (rev: 24315) 
I0611 09:03:24.645378  109206 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0611 09:03:24.645486  109206 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 09:03:24.645520  109206 store.go:1343] Monitoring persistentvolumeclaims count at <storage-prefix>//persistentvolumeclaims
I0611 09:03:24.645672  109206 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.645736  109206 client.go:354] parsed scheme: ""
I0611 09:03:24.645747  109206 client.go:354] scheme "" not registered, fallback to default scheme
I0611 09:03:24.645776  109206 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0611 09:03:24.645818  109206 reflector.go:160] Listing and watching *core.PersistentVolumeClaim from storage/cacher.go:/persistentvolumeclaims
I0611 09:03:24.645966  109206 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 09:03:24.646196  109206 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0611 09:03:24.646269  109206 store.go:1343] Monitoring configmaps count at <storage-prefix>//configmaps
I0611 09:03:24.646424  109206 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.646487  109206 client.go:354] parsed scheme: ""
I0611 09:03:24.646493  109206 watch_cache.go:405] Replace watchCache (rev: 24315) 
I0611 09:03:24.646524  109206 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 09:03:24.646563  109206 reflector.go:160] Listing and watching *core.ConfigMap from storage/cacher.go:/configmaps
I0611 09:03:24.646497  109206 client.go:354] scheme "" not registered, fallback to default scheme
I0611 09:03:24.646685  109206 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0611 09:03:24.646797  109206 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 09:03:24.647162  109206 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0611 09:03:24.647336  109206 store.go:1343] Monitoring namespaces count at <storage-prefix>//namespaces
I0611 09:03:24.647473  109206 reflector.go:160] Listing and watching *core.Namespace from storage/cacher.go:/namespaces
I0611 09:03:24.647380  109206 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 09:03:24.647621  109206 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.647690  109206 client.go:354] parsed scheme: ""
I0611 09:03:24.647710  109206 client.go:354] scheme "" not registered, fallback to default scheme
I0611 09:03:24.647728  109206 watch_cache.go:405] Replace watchCache (rev: 24315) 
I0611 09:03:24.647738  109206 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0611 09:03:24.647788  109206 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 09:03:24.647803  109206 watch_cache.go:405] Replace watchCache (rev: 24315) 
I0611 09:03:24.648016  109206 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0611 09:03:24.648066  109206 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 09:03:24.648111  109206 store.go:1343] Monitoring endpoints count at <storage-prefix>//services/endpoints
I0611 09:03:24.648250  109206 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.648313  109206 client.go:354] parsed scheme: ""
I0611 09:03:24.648323  109206 client.go:354] scheme "" not registered, fallback to default scheme
I0611 09:03:24.648376  109206 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0611 09:03:24.648420  109206 reflector.go:160] Listing and watching *core.Endpoints from storage/cacher.go:/services/endpoints
I0611 09:03:24.648567  109206 watch_cache.go:405] Replace watchCache (rev: 24315) 
I0611 09:03:24.648622  109206 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 09:03:24.649338  109206 watch_cache.go:405] Replace watchCache (rev: 24315) 
I0611 09:03:24.649493  109206 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0611 09:03:24.649534  109206 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 09:03:24.649624  109206 store.go:1343] Monitoring nodes count at <storage-prefix>//minions
I0611 09:03:24.649690  109206 reflector.go:160] Listing and watching *core.Node from storage/cacher.go:/minions
I0611 09:03:24.649763  109206 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.649831  109206 client.go:354] parsed scheme: ""
I0611 09:03:24.649842  109206 client.go:354] scheme "" not registered, fallback to default scheme
I0611 09:03:24.649873  109206 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0611 09:03:24.649917  109206 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 09:03:24.650664  109206 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0611 09:03:24.650741  109206 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 09:03:24.650771  109206 store.go:1343] Monitoring pods count at <storage-prefix>//pods
I0611 09:03:24.650856  109206 reflector.go:160] Listing and watching *core.Pod from storage/cacher.go:/pods
I0611 09:03:24.650888  109206 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.650944  109206 client.go:354] parsed scheme: ""
I0611 09:03:24.650954  109206 client.go:354] scheme "" not registered, fallback to default scheme
I0611 09:03:24.650983  109206 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0611 09:03:24.651049  109206 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 09:03:24.651290  109206 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0611 09:03:24.651399  109206 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 09:03:24.651422  109206 store.go:1343] Monitoring serviceaccounts count at <storage-prefix>//serviceaccounts
I0611 09:03:24.651475  109206 reflector.go:160] Listing and watching *core.ServiceAccount from storage/cacher.go:/serviceaccounts
I0611 09:03:24.651559  109206 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.651616  109206 client.go:354] parsed scheme: ""
I0611 09:03:24.651633  109206 client.go:354] scheme "" not registered, fallback to default scheme
I0611 09:03:24.651662  109206 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0611 09:03:24.651778  109206 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 09:03:24.651817  109206 watch_cache.go:405] Replace watchCache (rev: 24315) 
I0611 09:03:24.652096  109206 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0611 09:03:24.652166  109206 watch_cache.go:405] Replace watchCache (rev: 24315) 
I0611 09:03:24.652246  109206 store.go:1343] Monitoring services count at <storage-prefix>//services/specs
I0611 09:03:24.652271  109206 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.652366  109206 client.go:354] parsed scheme: ""
I0611 09:03:24.652377  109206 client.go:354] scheme "" not registered, fallback to default scheme
I0611 09:03:24.652405  109206 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0611 09:03:24.652450  109206 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 09:03:24.652513  109206 reflector.go:160] Listing and watching *core.Service from storage/cacher.go:/services/specs
I0611 09:03:24.653411  109206 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 09:03:24.653900  109206 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0611 09:03:24.653993  109206 client.go:354] parsed scheme: ""
I0611 09:03:24.654003  109206 client.go:354] scheme "" not registered, fallback to default scheme
I0611 09:03:24.654028  109206 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0611 09:03:24.654062  109206 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 09:03:24.654104  109206 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 09:03:24.654431  109206 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0611 09:03:24.654594  109206 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.654654  109206 client.go:354] parsed scheme: ""
I0611 09:03:24.654664  109206 client.go:354] scheme "" not registered, fallback to default scheme
I0611 09:03:24.654694  109206 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0611 09:03:24.654756  109206 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 09:03:24.654797  109206 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 09:03:24.655224  109206 watch_cache.go:405] Replace watchCache (rev: 24315) 
I0611 09:03:24.655235  109206 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0611 09:03:24.655355  109206 store.go:1343] Monitoring replicationcontrollers count at <storage-prefix>//controllers
I0611 09:03:24.655500  109206 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 09:03:24.655527  109206 reflector.go:160] Listing and watching *core.ReplicationController from storage/cacher.go:/controllers
I0611 09:03:24.655834  109206 storage_factory.go:285] storing bindings in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.655998  109206 storage_factory.go:285] storing componentstatuses in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.656616  109206 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.657292  109206 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.657917  109206 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.659540  109206 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.660196  109206 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.660410  109206 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.660746  109206 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.661264  109206 watch_cache.go:405] Replace watchCache (rev: 24315) 
I0611 09:03:24.662112  109206 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.663095  109206 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.663284  109206 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.664078  109206 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.664331  109206 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.664915  109206 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.665197  109206 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.665792  109206 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.665967  109206 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.666179  109206 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.666297  109206 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.666463  109206 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.666579  109206 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.666731  109206 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.667289  109206 watch_cache.go:405] Replace watchCache (rev: 24315) 
I0611 09:03:24.668386  109206 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.668714  109206 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.669536  109206 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.670327  109206 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.670665  109206 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.670892  109206 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.671665  109206 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.672011  109206 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.672711  109206 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.673582  109206 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.674334  109206 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.675213  109206 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.675484  109206 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.675595  109206 master.go:417] Skipping disabled API group "auditregistration.k8s.io".
I0611 09:03:24.675621  109206 master.go:425] Enabling API group "authentication.k8s.io".
I0611 09:03:24.675643  109206 master.go:425] Enabling API group "authorization.k8s.io".
I0611 09:03:24.675796  109206 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.675901  109206 client.go:354] parsed scheme: ""
I0611 09:03:24.675995  109206 client.go:354] scheme "" not registered, fallback to default scheme
I0611 09:03:24.676081  109206 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0611 09:03:24.676164  109206 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 09:03:24.677066  109206 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0611 09:03:24.677110  109206 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 09:03:24.677295  109206 store.go:1343] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0611 09:03:24.677480  109206 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.677546  109206 client.go:354] parsed scheme: ""
I0611 09:03:24.677558  109206 client.go:354] scheme "" not registered, fallback to default scheme
I0611 09:03:24.677594  109206 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0611 09:03:24.677655  109206 reflector.go:160] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0611 09:03:24.677921  109206 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 09:03:24.678299  109206 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0611 09:03:24.678408  109206 store.go:1343] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0611 09:03:24.678546  109206 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.678609  109206 client.go:354] parsed scheme: ""
I0611 09:03:24.678619  109206 client.go:354] scheme "" not registered, fallback to default scheme
I0611 09:03:24.678656  109206 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0611 09:03:24.678695  109206 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 09:03:24.678729  109206 reflector.go:160] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0611 09:03:24.678933  109206 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 09:03:24.679158  109206 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0611 09:03:24.679226  109206 store.go:1343] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0611 09:03:24.679240  109206 master.go:425] Enabling API group "autoscaling".
I0611 09:03:24.679395  109206 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.679452  109206 client.go:354] parsed scheme: ""
I0611 09:03:24.679461  109206 client.go:354] scheme "" not registered, fallback to default scheme
I0611 09:03:24.679488  109206 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0611 09:03:24.679524  109206 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 09:03:24.679556  109206 reflector.go:160] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0611 09:03:24.679775  109206 watch_cache.go:405] Replace watchCache (rev: 24315) 
I0611 09:03:24.679776  109206 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 09:03:24.680093  109206 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0611 09:03:24.680133  109206 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 09:03:24.680286  109206 store.go:1343] Monitoring jobs.batch count at <storage-prefix>//jobs
I0611 09:03:24.680432  109206 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.680490  109206 client.go:354] parsed scheme: ""
I0611 09:03:24.680500  109206 client.go:354] scheme "" not registered, fallback to default scheme
I0611 09:03:24.680528  109206 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0611 09:03:24.680578  109206 reflector.go:160] Listing and watching *batch.Job from storage/cacher.go:/jobs
I0611 09:03:24.680769  109206 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 09:03:24.680888  109206 watch_cache.go:405] Replace watchCache (rev: 24315) 
I0611 09:03:24.681961  109206 watch_cache.go:405] Replace watchCache (rev: 24315) 
I0611 09:03:24.682026  109206 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0611 09:03:24.682131  109206 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 09:03:24.682332  109206 store.go:1343] Monitoring cronjobs.batch count at <storage-prefix>//cronjobs
I0611 09:03:24.682467  109206 master.go:425] Enabling API group "batch".
I0611 09:03:24.682413  109206 reflector.go:160] Listing and watching *batch.CronJob from storage/cacher.go:/cronjobs
I0611 09:03:24.682659  109206 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.682817  109206 client.go:354] parsed scheme: ""
I0611 09:03:24.682834  109206 client.go:354] scheme "" not registered, fallback to default scheme
I0611 09:03:24.682865  109206 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0611 09:03:24.682933  109206 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 09:03:24.683261  109206 watch_cache.go:405] Replace watchCache (rev: 24315) 
I0611 09:03:24.683272  109206 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0611 09:03:24.683353  109206 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 09:03:24.683507  109206 store.go:1343] Monitoring certificatesigningrequests.certificates.k8s.io count at <storage-prefix>//certificatesigningrequests
I0611 09:03:24.683530  109206 master.go:425] Enabling API group "certificates.k8s.io".
I0611 09:03:24.683664  109206 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.683728  109206 client.go:354] parsed scheme: ""
I0611 09:03:24.683737  109206 client.go:354] scheme "" not registered, fallback to default scheme
I0611 09:03:24.683766  109206 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0611 09:03:24.683811  109206 reflector.go:160] Listing and watching *certificates.CertificateSigningRequest from storage/cacher.go:/certificatesigningrequests
I0611 09:03:24.683994  109206 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 09:03:24.684642  109206 watch_cache.go:405] Replace watchCache (rev: 24315) 
I0611 09:03:24.684842  109206 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0611 09:03:24.684869  109206 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 09:03:24.685020  109206 store.go:1343] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0611 09:03:24.685065  109206 reflector.go:160] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0611 09:03:24.685452  109206 watch_cache.go:405] Replace watchCache (rev: 24315) 
I0611 09:03:24.686321  109206 watch_cache.go:405] Replace watchCache (rev: 24315) 
I0611 09:03:24.686329  109206 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.687000  109206 client.go:354] parsed scheme: ""
I0611 09:03:24.687042  109206 client.go:354] scheme "" not registered, fallback to default scheme
I0611 09:03:24.687076  109206 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0611 09:03:24.687189  109206 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 09:03:24.688398  109206 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0611 09:03:24.688455  109206 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 09:03:24.688608  109206 store.go:1343] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0611 09:03:24.688636  109206 master.go:425] Enabling API group "coordination.k8s.io".
I0611 09:03:24.688796  109206 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.688863  109206 client.go:354] parsed scheme: ""
I0611 09:03:24.688879  109206 client.go:354] scheme "" not registered, fallback to default scheme
I0611 09:03:24.688916  109206 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0611 09:03:24.688967  109206 reflector.go:160] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0611 09:03:24.689160  109206 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 09:03:24.690309  109206 watch_cache.go:405] Replace watchCache (rev: 24315) 
I0611 09:03:24.691148  109206 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0611 09:03:24.691266  109206 store.go:1343] Monitoring replicationcontrollers count at <storage-prefix>//controllers
I0611 09:03:24.691427  109206 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.691493  109206 client.go:354] parsed scheme: ""
I0611 09:03:24.691509  109206 client.go:354] scheme "" not registered, fallback to default scheme
I0611 09:03:24.691545  109206 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0611 09:03:24.691590  109206 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 09:03:24.691633  109206 reflector.go:160] Listing and watching *core.ReplicationController from storage/cacher.go:/controllers
I0611 09:03:24.691830  109206 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 09:03:24.692110  109206 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0611 09:03:24.692238  109206 store.go:1343] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0611 09:03:24.692976  109206 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 09:03:24.693125  109206 reflector.go:160] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0611 09:03:24.693891  109206 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.693994  109206 client.go:354] parsed scheme: ""
I0611 09:03:24.694011  109206 client.go:354] scheme "" not registered, fallback to default scheme
I0611 09:03:24.694069  109206 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0611 09:03:24.694119  109206 watch_cache.go:405] Replace watchCache (rev: 24315) 
I0611 09:03:24.694152  109206 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 09:03:24.695291  109206 watch_cache.go:405] Replace watchCache (rev: 24315) 
I0611 09:03:24.696107  109206 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0611 09:03:24.696298  109206 store.go:1343] Monitoring deployments.apps count at <storage-prefix>//deployments
I0611 09:03:24.696494  109206 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.696565  109206 client.go:354] parsed scheme: ""
I0611 09:03:24.696580  109206 client.go:354] scheme "" not registered, fallback to default scheme
I0611 09:03:24.696610  109206 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0611 09:03:24.696675  109206 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 09:03:24.696709  109206 reflector.go:160] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0611 09:03:24.696871  109206 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 09:03:24.698004  109206 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0611 09:03:24.698065  109206 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 09:03:24.698162  109206 store.go:1343] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0611 09:03:24.698249  109206 reflector.go:160] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0611 09:03:24.698657  109206 watch_cache.go:405] Replace watchCache (rev: 24315) 
I0611 09:03:24.698725  109206 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.699114  109206 watch_cache.go:405] Replace watchCache (rev: 24315) 
I0611 09:03:24.699258  109206 client.go:354] parsed scheme: ""
I0611 09:03:24.699271  109206 client.go:354] scheme "" not registered, fallback to default scheme
I0611 09:03:24.699307  109206 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0611 09:03:24.699930  109206 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 09:03:24.700432  109206 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0611 09:03:24.700524  109206 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 09:03:24.700610  109206 store.go:1343] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicy
I0611 09:03:24.700653  109206 reflector.go:160] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicy
I0611 09:03:24.700770  109206 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.700837  109206 client.go:354] parsed scheme: ""
I0611 09:03:24.700853  109206 client.go:354] scheme "" not registered, fallback to default scheme
I0611 09:03:24.700896  109206 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0611 09:03:24.700950  109206 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 09:03:24.701284  109206 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0611 09:03:24.701402  109206 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 09:03:24.701631  109206 store.go:1343] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0611 09:03:24.701701  109206 reflector.go:160] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0611 09:03:24.702037  109206 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.702155  109206 client.go:354] parsed scheme: ""
I0611 09:03:24.702206  109206 client.go:354] scheme "" not registered, fallback to default scheme
I0611 09:03:24.702250  109206 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0611 09:03:24.702085  109206 watch_cache.go:405] Replace watchCache (rev: 24315) 
I0611 09:03:24.702374  109206 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 09:03:24.702627  109206 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0611 09:03:24.702686  109206 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 09:03:24.702715  109206 store.go:1343] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I0611 09:03:24.702738  109206 master.go:425] Enabling API group "extensions".
I0611 09:03:24.702785  109206 reflector.go:160] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I0611 09:03:24.702876  109206 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.702937  109206 client.go:354] parsed scheme: ""
I0611 09:03:24.702947  109206 client.go:354] scheme "" not registered, fallback to default scheme
I0611 09:03:24.702974  109206 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0611 09:03:24.703118  109206 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 09:03:24.703422  109206 watch_cache.go:405] Replace watchCache (rev: 24315) 
I0611 09:03:24.703442  109206 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0611 09:03:24.703652  109206 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 09:03:24.703802  109206 watch_cache.go:405] Replace watchCache (rev: 24315) 
I0611 09:03:24.704051  109206 store.go:1343] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I0611 09:03:24.704198  109206 reflector.go:160] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I0611 09:03:24.704587  109206 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.704814  109206 client.go:354] parsed scheme: ""
I0611 09:03:24.704841  109206 client.go:354] scheme "" not registered, fallback to default scheme
I0611 09:03:24.704907  109206 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0611 09:03:24.705011  109206 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 09:03:24.705147  109206 watch_cache.go:405] Replace watchCache (rev: 24315) 
I0611 09:03:24.706330  109206 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0611 09:03:24.706467  109206 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 09:03:24.706519  109206 store.go:1343] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0611 09:03:24.706586  109206 master.go:425] Enabling API group "networking.k8s.io".
I0611 09:03:24.706651  109206 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.706726  109206 client.go:354] parsed scheme: ""
I0611 09:03:24.706749  109206 client.go:354] scheme "" not registered, fallback to default scheme
I0611 09:03:24.706790  109206 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0611 09:03:24.706548  109206 reflector.go:160] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0611 09:03:24.707397  109206 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 09:03:24.708237  109206 watch_cache.go:405] Replace watchCache (rev: 24315) 
I0611 09:03:24.708614  109206 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0611 09:03:24.708761  109206 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 09:03:24.708937  109206 store.go:1343] Monitoring runtimeclasses.node.k8s.io count at <storage-prefix>//runtimeclasses
I0611 09:03:24.709010  109206 master.go:425] Enabling API group "node.k8s.io".
I0611 09:03:24.709015  109206 reflector.go:160] Listing and watching *node.RuntimeClass from storage/cacher.go:/runtimeclasses
I0611 09:03:24.709415  109206 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.709528  109206 client.go:354] parsed scheme: ""
I0611 09:03:24.709646  109206 client.go:354] scheme "" not registered, fallback to default scheme
I0611 09:03:24.709728  109206 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0611 09:03:24.709812  109206 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 09:03:24.710127  109206 watch_cache.go:405] Replace watchCache (rev: 24315) 
I0611 09:03:24.710708  109206 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0611 09:03:24.710776  109206 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 09:03:24.710890  109206 store.go:1343] Monitoring poddisruptionbudgets.policy count at <storage-prefix>//poddisruptionbudgets
I0611 09:03:24.710974  109206 reflector.go:160] Listing and watching *policy.PodDisruptionBudget from storage/cacher.go:/poddisruptionbudgets
I0611 09:03:24.711042  109206 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.711119  109206 client.go:354] parsed scheme: ""
I0611 09:03:24.711137  109206 client.go:354] scheme "" not registered, fallback to default scheme
I0611 09:03:24.711168  109206 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0611 09:03:24.711214  109206 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 09:03:24.711668  109206 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0611 09:03:24.711795  109206 watch_cache.go:405] Replace watchCache (rev: 24315) 
I0611 09:03:24.711810  109206 store.go:1343] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicy
I0611 09:03:24.711824  109206 master.go:425] Enabling API group "policy".
I0611 09:03:24.711857  109206 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.711922  109206 client.go:354] parsed scheme: ""
I0611 09:03:24.711936  109206 client.go:354] scheme "" not registered, fallback to default scheme
I0611 09:03:24.711979  109206 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0611 09:03:24.711864  109206 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 09:03:24.711980  109206 reflector.go:160] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicy
I0611 09:03:24.712033  109206 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 09:03:24.712416  109206 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0611 09:03:24.712580  109206 store.go:1343] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0611 09:03:24.712765  109206 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 09:03:24.712864  109206 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.713062  109206 reflector.go:160] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0611 09:03:24.713366  109206 client.go:354] parsed scheme: ""
I0611 09:03:24.713814  109206 client.go:354] scheme "" not registered, fallback to default scheme
I0611 09:03:24.713899  109206 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0611 09:03:24.713983  109206 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 09:03:24.714177  109206 watch_cache.go:405] Replace watchCache (rev: 24315) 
I0611 09:03:24.714773  109206 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0611 09:03:24.714868  109206 store.go:1343] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0611 09:03:24.714905  109206 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.714964  109206 client.go:354] parsed scheme: ""
I0611 09:03:24.714979  109206 client.go:354] scheme "" not registered, fallback to default scheme
I0611 09:03:24.715311  109206 reflector.go:160] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0611 09:03:24.715316  109206 watch_cache.go:405] Replace watchCache (rev: 24315) 
I0611 09:03:24.715547  109206 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0611 09:03:24.715629  109206 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 09:03:24.716054  109206 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0611 09:03:24.716621  109206 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 09:03:24.716650  109206 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 09:03:24.716767  109206 watch_cache.go:405] Replace watchCache (rev: 24315) 
I0611 09:03:24.716843  109206 store.go:1343] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0611 09:03:24.716863  109206 reflector.go:160] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0611 09:03:24.717420  109206 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.717501  109206 client.go:354] parsed scheme: ""
I0611 09:03:24.717520  109206 client.go:354] scheme "" not registered, fallback to default scheme
I0611 09:03:24.717556  109206 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0611 09:03:24.717618  109206 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 09:03:24.718165  109206 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0611 09:03:24.718274  109206 store.go:1343] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0611 09:03:24.718319  109206 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.718405  109206 client.go:354] parsed scheme: ""
I0611 09:03:24.718443  109206 client.go:354] scheme "" not registered, fallback to default scheme
I0611 09:03:24.718499  109206 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0611 09:03:24.718549  109206 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 09:03:24.718600  109206 reflector.go:160] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0611 09:03:24.718638  109206 watch_cache.go:405] Replace watchCache (rev: 24315) 
I0611 09:03:24.718853  109206 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 09:03:24.719165  109206 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0611 09:03:24.719287  109206 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 09:03:24.719517  109206 store.go:1343] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0611 09:03:24.719696  109206 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.719795  109206 client.go:354] parsed scheme: ""
I0611 09:03:24.719818  109206 client.go:354] scheme "" not registered, fallback to default scheme
I0611 09:03:24.719720  109206 reflector.go:160] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0611 09:03:24.719872  109206 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0611 09:03:24.719928  109206 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 09:03:24.720157  109206 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0611 09:03:24.720254  109206 store.go:1343] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0611 09:03:24.720285  109206 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.720369  109206 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 09:03:24.720404  109206 client.go:354] parsed scheme: ""
I0611 09:03:24.720415  109206 client.go:354] scheme "" not registered, fallback to default scheme
I0611 09:03:24.720448  109206 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0611 09:03:24.720498  109206 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 09:03:24.720505  109206 reflector.go:160] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0611 09:03:24.720699  109206 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0611 09:03:24.720829  109206 store.go:1343] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0611 09:03:24.720992  109206 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.721094  109206 client.go:354] parsed scheme: ""
I0611 09:03:24.721130  109206 client.go:354] scheme "" not registered, fallback to default scheme
I0611 09:03:24.721187  109206 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0611 09:03:24.721241  109206 reflector.go:160] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0611 09:03:24.721317  109206 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 09:03:24.721479  109206 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 09:03:24.721785  109206 watch_cache.go:405] Replace watchCache (rev: 24315) 
I0611 09:03:24.723191  109206 watch_cache.go:405] Replace watchCache (rev: 24315) 
I0611 09:03:24.723202  109206 watch_cache.go:405] Replace watchCache (rev: 24315) 
I0611 09:03:24.725272  109206 watch_cache.go:405] Replace watchCache (rev: 24315) 
I0611 09:03:24.726109  109206 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0611 09:03:24.726262  109206 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 09:03:24.726447  109206 store.go:1343] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0611 09:03:24.726498  109206 reflector.go:160] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0611 09:03:24.726546  109206 master.go:425] Enabling API group "rbac.authorization.k8s.io".
I0611 09:03:24.728446  109206 watch_cache.go:405] Replace watchCache (rev: 24315) 
I0611 09:03:24.730529  109206 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.730734  109206 client.go:354] parsed scheme: ""
I0611 09:03:24.730776  109206 client.go:354] scheme "" not registered, fallback to default scheme
I0611 09:03:24.730827  109206 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0611 09:03:24.730906  109206 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 09:03:24.731504  109206 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0611 09:03:24.731913  109206 store.go:1343] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0611 09:03:24.732100  109206 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.732257  109206 client.go:354] parsed scheme: ""
I0611 09:03:24.732300  109206 client.go:354] scheme "" not registered, fallback to default scheme
I0611 09:03:24.732266  109206 reflector.go:160] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0611 09:03:24.732121  109206 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 09:03:24.733024  109206 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0611 09:03:24.733114  109206 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 09:03:24.733113  109206 watch_cache.go:405] Replace watchCache (rev: 24315) 
I0611 09:03:24.733578  109206 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0611 09:03:24.733908  109206 store.go:1343] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0611 09:03:24.733953  109206 master.go:425] Enabling API group "scheduling.k8s.io".
I0611 09:03:24.734130  109206 master.go:417] Skipping disabled API group "settings.k8s.io".
I0611 09:03:24.734293  109206 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 09:03:24.734440  109206 reflector.go:160] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0611 09:03:24.734422  109206 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.734636  109206 client.go:354] parsed scheme: ""
I0611 09:03:24.735212  109206 client.go:354] scheme "" not registered, fallback to default scheme
I0611 09:03:24.735312  109206 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0611 09:03:24.735414  109206 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 09:03:24.736004  109206 watch_cache.go:405] Replace watchCache (rev: 24315) 
I0611 09:03:24.736749  109206 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0611 09:03:24.736850  109206 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 09:03:24.736973  109206 store.go:1343] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0611 09:03:24.737049  109206 reflector.go:160] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0611 09:03:24.737214  109206 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.737951  109206 client.go:354] parsed scheme: ""
I0611 09:03:24.737974  109206 client.go:354] scheme "" not registered, fallback to default scheme
I0611 09:03:24.738029  109206 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0611 09:03:24.738101  109206 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 09:03:24.738250  109206 watch_cache.go:405] Replace watchCache (rev: 24315) 
I0611 09:03:24.738424  109206 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0611 09:03:24.739116  109206 store.go:1343] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0611 09:03:24.739159  109206 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 09:03:24.739203  109206 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.739255  109206 reflector.go:160] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0611 09:03:24.739299  109206 client.go:354] parsed scheme: ""
I0611 09:03:24.739320  109206 client.go:354] scheme "" not registered, fallback to default scheme
I0611 09:03:24.739389  109206 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0611 09:03:24.739457  109206 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 09:03:24.739827  109206 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0611 09:03:24.739958  109206 store.go:1343] Monitoring csinodes.storage.k8s.io count at <storage-prefix>//csinodes
I0611 09:03:24.739988  109206 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.740048  109206 client.go:354] parsed scheme: ""
I0611 09:03:24.740058  109206 client.go:354] scheme "" not registered, fallback to default scheme
I0611 09:03:24.740088  109206 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0611 09:03:24.740175  109206 reflector.go:160] Listing and watching *storage.CSINode from storage/cacher.go:/csinodes
I0611 09:03:24.740264  109206 watch_cache.go:405] Replace watchCache (rev: 24315) 
I0611 09:03:24.740328  109206 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 09:03:24.740413  109206 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 09:03:24.740658  109206 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0611 09:03:24.740745  109206 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 09:03:24.740796  109206 store.go:1343] Monitoring csidrivers.storage.k8s.io count at <storage-prefix>//csidrivers
I0611 09:03:24.741768  109206 watch_cache.go:405] Replace watchCache (rev: 24315) 
I0611 09:03:24.742249  109206 reflector.go:160] Listing and watching *storage.CSIDriver from storage/cacher.go:/csidrivers
I0611 09:03:24.742658  109206 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.742782  109206 client.go:354] parsed scheme: ""
I0611 09:03:24.742807  109206 client.go:354] scheme "" not registered, fallback to default scheme
I0611 09:03:24.742933  109206 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0611 09:03:24.742934  109206 watch_cache.go:405] Replace watchCache (rev: 24315) 
I0611 09:03:24.743010  109206 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 09:03:24.743721  109206 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0611 09:03:24.743809  109206 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 09:03:24.743918  109206 store.go:1343] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0611 09:03:24.744027  109206 reflector.go:160] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0611 09:03:24.744224  109206 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.744329  109206 client.go:354] parsed scheme: ""
I0611 09:03:24.744389  109206 client.go:354] scheme "" not registered, fallback to default scheme
I0611 09:03:24.744446  109206 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0611 09:03:24.744504  109206 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 09:03:24.744758  109206 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0611 09:03:24.744917  109206 store.go:1343] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0611 09:03:24.744962  109206 master.go:425] Enabling API group "storage.k8s.io".
I0611 09:03:24.745054  109206 watch_cache.go:405] Replace watchCache (rev: 24315) 
I0611 09:03:24.745124  109206 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 09:03:24.745194  109206 reflector.go:160] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0611 09:03:24.746101  109206 watch_cache.go:405] Replace watchCache (rev: 24315) 
I0611 09:03:24.746338  109206 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.746482  109206 client.go:354] parsed scheme: ""
I0611 09:03:24.746495  109206 client.go:354] scheme "" not registered, fallback to default scheme
I0611 09:03:24.746529  109206 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0611 09:03:24.746569  109206 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 09:03:24.747259  109206 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0611 09:03:24.747433  109206 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 09:03:24.747494  109206 store.go:1343] Monitoring deployments.apps count at <storage-prefix>//deployments
I0611 09:03:24.747533  109206 reflector.go:160] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0611 09:03:24.747818  109206 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.747944  109206 client.go:354] parsed scheme: ""
I0611 09:03:24.747968  109206 client.go:354] scheme "" not registered, fallback to default scheme
I0611 09:03:24.748021  109206 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0611 09:03:24.748077  109206 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 09:03:24.748510  109206 watch_cache.go:405] Replace watchCache (rev: 24315) 
I0611 09:03:24.748822  109206 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0611 09:03:24.748902  109206 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 09:03:24.749277  109206 store.go:1343] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0611 09:03:24.749432  109206 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.749501  109206 client.go:354] parsed scheme: ""
I0611 09:03:24.749517  109206 client.go:354] scheme "" not registered, fallback to default scheme
I0611 09:03:24.749555  109206 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0611 09:03:24.749605  109206 reflector.go:160] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0611 09:03:24.750615  109206 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 09:03:24.751538  109206 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0611 09:03:24.751611  109206 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 09:03:24.751658  109206 store.go:1343] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0611 09:03:24.751775  109206 reflector.go:160] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0611 09:03:24.751794  109206 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.751878  109206 client.go:354] parsed scheme: ""
I0611 09:03:24.751888  109206 client.go:354] scheme "" not registered, fallback to default scheme
I0611 09:03:24.751915  109206 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0611 09:03:24.751961  109206 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 09:03:24.752168  109206 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0611 09:03:24.752260  109206 store.go:1343] Monitoring deployments.apps count at <storage-prefix>//deployments
I0611 09:03:24.752411  109206 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.752466  109206 client.go:354] parsed scheme: ""
I0611 09:03:24.752476  109206 client.go:354] scheme "" not registered, fallback to default scheme
I0611 09:03:24.752503  109206 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0611 09:03:24.752539  109206 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 09:03:24.752559  109206 reflector.go:160] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0611 09:03:24.752623  109206 watch_cache.go:405] Replace watchCache (rev: 24315) 
I0611 09:03:24.752762  109206 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 09:03:24.752990  109206 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0611 09:03:24.753123  109206 store.go:1343] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0611 09:03:24.753271  109206 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.753328  109206 client.go:354] parsed scheme: ""
I0611 09:03:24.753338  109206 client.go:354] scheme "" not registered, fallback to default scheme
I0611 09:03:24.753414  109206 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0611 09:03:24.753464  109206 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 09:03:24.753497  109206 reflector.go:160] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0611 09:03:24.753656  109206 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 09:03:24.753720  109206 watch_cache.go:405] Replace watchCache (rev: 24315) 
I0611 09:03:24.753994  109206 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0611 09:03:24.754106  109206 store.go:1343] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0611 09:03:24.754265  109206 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.754328  109206 client.go:354] parsed scheme: ""
I0611 09:03:24.754337  109206 client.go:354] scheme "" not registered, fallback to default scheme
I0611 09:03:24.754384  109206 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0611 09:03:24.754447  109206 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 09:03:24.754542  109206 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 09:03:24.754534  109206 reflector.go:160] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0611 09:03:24.754879  109206 watch_cache.go:405] Replace watchCache (rev: 24315) 
I0611 09:03:24.755434  109206 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0611 09:03:24.755640  109206 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 09:03:24.755532  109206 watch_cache.go:405] Replace watchCache (rev: 24315) 
I0611 09:03:24.755765  109206 store.go:1343] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0611 09:03:24.755893  109206 reflector.go:160] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0611 09:03:24.755904  109206 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.755969  109206 client.go:354] parsed scheme: ""
I0611 09:03:24.755979  109206 client.go:354] scheme "" not registered, fallback to default scheme
I0611 09:03:24.756007  109206 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0611 09:03:24.756058  109206 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 09:03:24.756336  109206 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0611 09:03:24.756442  109206 store.go:1343] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0611 09:03:24.756568  109206 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.756624  109206 client.go:354] parsed scheme: ""
I0611 09:03:24.756636  109206 client.go:354] scheme "" not registered, fallback to default scheme
I0611 09:03:24.756666  109206 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0611 09:03:24.756707  109206 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 09:03:24.756737  109206 reflector.go:160] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0611 09:03:24.756915  109206 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 09:03:24.757194  109206 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0611 09:03:24.757284  109206 store.go:1343] Monitoring deployments.apps count at <storage-prefix>//deployments
I0611 09:03:24.757451  109206 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.757514  109206 client.go:354] parsed scheme: ""
I0611 09:03:24.757523  109206 client.go:354] scheme "" not registered, fallback to default scheme
I0611 09:03:24.757550  109206 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0611 09:03:24.757624  109206 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 09:03:24.757666  109206 reflector.go:160] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0611 09:03:24.757862  109206 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 09:03:24.758120  109206 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0611 09:03:24.758219  109206 store.go:1343] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0611 09:03:24.758374  109206 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.758443  109206 client.go:354] parsed scheme: ""
I0611 09:03:24.758453  109206 client.go:354] scheme "" not registered, fallback to default scheme
I0611 09:03:24.758482  109206 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0611 09:03:24.758521  109206 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 09:03:24.758552  109206 reflector.go:160] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0611 09:03:24.758748  109206 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 09:03:24.758984  109206 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0611 09:03:24.759090  109206 store.go:1343] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0611 09:03:24.759235  109206 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.759288  109206 client.go:354] parsed scheme: ""
I0611 09:03:24.759300  109206 client.go:354] scheme "" not registered, fallback to default scheme
I0611 09:03:24.759328  109206 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0611 09:03:24.760887  109206 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 09:03:24.760949  109206 reflector.go:160] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0611 09:03:24.761186  109206 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 09:03:24.761440  109206 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0611 09:03:24.761540  109206 store.go:1343] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0611 09:03:24.761566  109206 watch_cache.go:405] Replace watchCache (rev: 24315) 
I0611 09:03:24.761678  109206 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.761727  109206 watch_cache.go:405] Replace watchCache (rev: 24315) 
I0611 09:03:24.761748  109206 client.go:354] parsed scheme: ""
I0611 09:03:24.761758  109206 client.go:354] scheme "" not registered, fallback to default scheme
I0611 09:03:24.761793  109206 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0611 09:03:24.761849  109206 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 09:03:24.761878  109206 reflector.go:160] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0611 09:03:24.762058  109206 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 09:03:24.762080  109206 watch_cache.go:405] Replace watchCache (rev: 24315) 
I0611 09:03:24.762088  109206 watch_cache.go:405] Replace watchCache (rev: 24315) 
I0611 09:03:24.762538  109206 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0611 09:03:24.762618  109206 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 09:03:24.762543  109206 watch_cache.go:405] Replace watchCache (rev: 24315) 
I0611 09:03:24.762667  109206 store.go:1343] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0611 09:03:24.762694  109206 master.go:425] Enabling API group "apps".
I0611 09:03:24.762733  109206 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.762817  109206 client.go:354] parsed scheme: ""
I0611 09:03:24.762838  109206 client.go:354] scheme "" not registered, fallback to default scheme
I0611 09:03:24.762877  109206 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0611 09:03:24.762928  109206 reflector.go:160] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0611 09:03:24.763064  109206 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 09:03:24.763371  109206 watch_cache.go:405] Replace watchCache (rev: 24315) 
I0611 09:03:24.763399  109206 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0611 09:03:24.763491  109206 store.go:1343] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0611 09:03:24.763518  109206 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.763578  109206 client.go:354] parsed scheme: ""
I0611 09:03:24.763587  109206 client.go:354] scheme "" not registered, fallback to default scheme
I0611 09:03:24.763621  109206 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0611 09:03:24.763660  109206 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 09:03:24.763677  109206 reflector.go:160] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0611 09:03:24.763777  109206 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 09:03:24.764009  109206 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0611 09:03:24.764089  109206 store.go:1343] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0611 09:03:24.764101  109206 master.go:425] Enabling API group "admissionregistration.k8s.io".
I0611 09:03:24.764113  109206 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 09:03:24.764138  109206 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.764164  109206 reflector.go:160] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0611 09:03:24.764294  109206 client.go:354] parsed scheme: ""
I0611 09:03:24.764305  109206 client.go:354] scheme "" not registered, fallback to default scheme
I0611 09:03:24.764364  109206 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0611 09:03:24.764417  109206 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 09:03:24.764603  109206 watch_cache.go:405] Replace watchCache (rev: 24315) 
I0611 09:03:24.765003  109206 watch_cache.go:405] Replace watchCache (rev: 24315) 
I0611 09:03:24.765104  109206 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0611 09:03:24.765425  109206 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 09:03:24.765595  109206 store.go:1343] Monitoring events count at <storage-prefix>//events
I0611 09:03:24.765636  109206 master.go:425] Enabling API group "events.k8s.io".
I0611 09:03:24.765856  109206 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.766138  109206 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.766302  109206 watch_cache.go:405] Replace watchCache (rev: 24315) 
I0611 09:03:24.766441  109206 reflector.go:160] Listing and watching *core.Event from storage/cacher.go:/events
I0611 09:03:24.766458  109206 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.766651  109206 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.766801  109206 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.766925  109206 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.767143  109206 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.767270  109206 watch_cache.go:405] Replace watchCache (rev: 24315) 
I0611 09:03:24.767277  109206 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.767436  109206 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.767575  109206 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.768543  109206 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.768857  109206 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.769763  109206 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.770056  109206 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.770693  109206 watch_cache.go:405] Replace watchCache (rev: 24315) 
I0611 09:03:24.770978  109206 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.771284  109206 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.772183  109206 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.772523  109206 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.773295  109206 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.773649  109206 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0611 09:03:24.773740  109206 genericapiserver.go:351] Skipping API batch/v2alpha1 because it has no resources.
I0611 09:03:24.774510  109206 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.774683  109206 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.774935  109206 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.775770  109206 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.776584  109206 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.777461  109206 storage_factory.go:285] storing daemonsets.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.777832  109206 storage_factory.go:285] storing daemonsets.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.778549  109206 storage_factory.go:285] storing deployments.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.778723  109206 storage_factory.go:285] storing deployments.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.778970  109206 storage_factory.go:285] storing deployments.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.779334  109206 storage_factory.go:285] storing deployments.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.780077  109206 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.780529  109206 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.781370  109206 storage_factory.go:285] storing networkpolicies.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.782029  109206 storage_factory.go:285] storing podsecuritypolicies.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.782958  109206 storage_factory.go:285] storing replicasets.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.783243  109206 storage_factory.go:285] storing replicasets.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.783521  109206 storage_factory.go:285] storing replicasets.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.783614  109206 storage_factory.go:285] storing replicationcontrollers.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.783862  109206 storage_factory.go:285] storing replicationcontrollers.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.784718  109206 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.785631  109206 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.785911  109206 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.786699  109206 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0611 09:03:24.786844  109206 genericapiserver.go:351] Skipping API node.k8s.io/v1alpha1 because it has no resources.
I0611 09:03:24.787687  109206 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.787982  109206 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.788565  109206 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.789220  109206 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.789829  109206 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.790563  109206 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.791267  109206 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.791910  109206 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.792496  109206 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.793303  109206 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.793977  109206 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0611 09:03:24.794087  109206 genericapiserver.go:351] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
I0611 09:03:24.794729  109206 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.795429  109206 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0611 09:03:24.795520  109206 genericapiserver.go:351] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
I0611 09:03:24.796081  109206 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.796657  109206 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.796930  109206 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.797600  109206 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.798097  109206 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.798825  109206 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.799523  109206 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0611 09:03:24.799618  109206 genericapiserver.go:351] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
I0611 09:03:24.800763  109206 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.801606  109206 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.801913  109206 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.802656  109206 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.803014  109206 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.803358  109206 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.804035  109206 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.804319  109206 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.804594  109206 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.805474  109206 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.805782  109206 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.806057  109206 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.807053  109206 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.807827  109206 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.808147  109206 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.808850  109206 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.809193  109206 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.809498  109206 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.810219  109206 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.810540  109206 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.810806  109206 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.811566  109206 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.811985  109206 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.812272  109206 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.813251  109206 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.814038  109206 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.814239  109206 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.814507  109206 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.814763  109206 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.815629  109206 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.815922  109206 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.816202  109206 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.816958  109206 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.817627  109206 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.818465  109206 storage_factory.go:285] storing events.events.k8s.io in events.k8s.io/v1beta1, reading as events.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6a68fe46-908c-46f3-bb34-f4df41673a6c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0611 09:03:24.820582  109206 healthz.go:161] healthz check etcd failed: etcd client connection not yet established
I0611 09:03:24.820614  109206 healthz.go:161] healthz check poststarthook/bootstrap-controller failed: not finished
I0611 09:03:24.820624  109206 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0611 09:03:24.820636  109206 healthz.go:161] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0611 09:03:24.820645  109206 healthz.go:161] healthz check poststarthook/ca-registration failed: not finished
I0611 09:03:24.820658  109206 healthz.go:175] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[-]poststarthook/bootstrap-controller failed: reason withheld
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0611 09:03:24.820825  109206 wrap.go:47] GET /healthz: (350.307µs) 500
goroutine 26526 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc005f2c850, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc005f2c850, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00deb9a60, 0x1f4)
net/http.Error(0x7f41414b65b0, 0xc00d5ca798, 0xc00b1be000, 0x18a, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f41414b65b0, 0xc00d5ca798, 0xc00ac53800)
net/http.HandlerFunc.ServeHTTP(0xc00d3315a0, 0x7f41414b65b0, 0xc00d5ca798, 0xc00ac53800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc002e57ec0, 0x7f41414b65b0, 0xc00d5ca798, 0xc00ac53800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00edee150, 0x7f41414b65b0, 0xc00d5ca798, 0xc00ac53800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43c0f1b, 0xe, 0xc010a6a630, 0xc00edee150, 0x7f41414b65b0, 0xc00d5ca798, 0xc00ac53800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f41414b65b0, 0xc00d5ca798, 0xc00ac53800)
net/http.HandlerFunc.ServeHTTP(0xc010413d00, 0x7f41414b65b0, 0xc00d5ca798, 0xc00ac53800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f41414b65b0, 0xc00d5ca798, 0xc00ac53800)
net/http.HandlerFunc.ServeHTTP(0xc00edba870, 0x7f41414b65b0, 0xc00d5ca798, 0xc00ac53800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f41414b65b0, 0xc00d5ca798, 0xc00ac53800)
net/http.HandlerFunc.ServeHTTP(0xc010413d40, 0x7f41414b65b0, 0xc00d5ca798, 0xc00ac53800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f41414b65b0, 0xc00d5ca798, 0xc00ac53700)
net/http.HandlerFunc.ServeHTTP(0xc0104a3f90, 0x7f41414b65b0, 0xc00d5ca798, 0xc00ac53700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc005aaac60, 0xc00f061d40, 0x74261e0, 0xc00d5ca798, 0xc00ac53700)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[-]poststarthook/bootstrap-controller failed: reason withheld\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:59254]
I0611 09:03:24.821953  109206 wrap.go:47] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.503954ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59256]
I0611 09:03:24.825069  109206 wrap.go:47] GET /api/v1/services: (1.600898ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59256]
I0611 09:03:24.829409  109206 wrap.go:47] GET /api/v1/services: (1.115548ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59256]
I0611 09:03:24.831695  109206 healthz.go:161] healthz check etcd failed: etcd client connection not yet established
I0611 09:03:24.831726  109206 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0611 09:03:24.831740  109206 healthz.go:161] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0611 09:03:24.831749  109206 healthz.go:161] healthz check poststarthook/ca-registration failed: not finished
I0611 09:03:24.831765  109206 healthz.go:175] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0611 09:03:24.831902  109206 wrap.go:47] GET /healthz: (311.592µs) 500
goroutine 26908 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00b21a7e0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00b21a7e0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00d2fb240, 0x1f4)
net/http.Error(0x7f41414b65b0, 0xc011404b60, 0xc00c962780, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f41414b65b0, 0xc011404b60, 0xc005c05100)
net/http.HandlerFunc.ServeHTTP(0xc00d3315a0, 0x7f41414b65b0, 0xc011404b60, 0xc005c05100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc002e57ec0, 0x7f41414b65b0, 0xc011404b60, 0xc005c05100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00edee150, 0x7f41414b65b0, 0xc011404b60, 0xc005c05100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43c0f1b, 0xe, 0xc010a6a630, 0xc00edee150, 0x7f41414b65b0, 0xc011404b60, 0xc005c05100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f41414b65b0, 0xc011404b60, 0xc005c05100)
net/http.HandlerFunc.ServeHTTP(0xc010413d00, 0x7f41414b65b0, 0xc011404b60, 0xc005c05100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f41414b65b0, 0xc011404b60, 0xc005c05100)
net/http.HandlerFunc.ServeHTTP(0xc00edba870, 0x7f41414b65b0, 0xc011404b60, 0xc005c05100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f41414b65b0, 0xc011404b60, 0xc005c05100)
net/http.HandlerFunc.ServeHTTP(0xc010413d40, 0x7f41414b65b0, 0xc011404b60, 0xc005c05100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f41414b65b0, 0xc011404b60, 0xc005c05000)
net/http.HandlerFunc.ServeHTTP(0xc0104a3f90, 0x7f41414b65b0, 0xc011404b60, 0xc005c05000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0048978c0, 0xc00f061d40, 0x74261e0, 0xc011404b60, 0xc005c05000)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59254]
I0611 09:03:24.833943  109206 wrap.go:47] GET /api/v1/services: (1.309936ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:24.835031  109206 wrap.go:47] GET /api/v1/services: (897.378µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59254]
I0611 09:03:24.835300  109206 wrap.go:47] GET /api/v1/namespaces/kube-system: (3.612666ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59256]
I0611 09:03:24.837750  109206 wrap.go:47] POST /api/v1/namespaces: (1.851369ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59254]
I0611 09:03:24.839327  109206 wrap.go:47] GET /api/v1/namespaces/kube-public: (1.269298ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59254]
I0611 09:03:24.840940  109206 wrap.go:47] POST /api/v1/namespaces: (1.295028ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59254]
I0611 09:03:24.842132  109206 wrap.go:47] GET /api/v1/namespaces/kube-node-lease: (937.313µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59254]
I0611 09:03:24.844398  109206 wrap.go:47] POST /api/v1/namespaces: (1.811428ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59254]
I0611 09:03:24.921687  109206 healthz.go:161] healthz check etcd failed: etcd client connection not yet established
I0611 09:03:24.921728  109206 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0611 09:03:24.921742  109206 healthz.go:161] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0611 09:03:24.921752  109206 healthz.go:161] healthz check poststarthook/ca-registration failed: not finished
I0611 09:03:24.921759  109206 healthz.go:175] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0611 09:03:24.921950  109206 wrap.go:47] GET /healthz: (453.961µs) 500
goroutine 26931 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0042c9340, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0042c9340, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00dcc3c40, 0x1f4)
net/http.Error(0x7f41414b65b0, 0xc010efa8f0, 0xc00bac0780, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f41414b65b0, 0xc010efa8f0, 0xc004753100)
net/http.HandlerFunc.ServeHTTP(0xc00d3315a0, 0x7f41414b65b0, 0xc010efa8f0, 0xc004753100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc002e57ec0, 0x7f41414b65b0, 0xc010efa8f0, 0xc004753100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00edee150, 0x7f41414b65b0, 0xc010efa8f0, 0xc004753100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43c0f1b, 0xe, 0xc010a6a630, 0xc00edee150, 0x7f41414b65b0, 0xc010efa8f0, 0xc004753100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f41414b65b0, 0xc010efa8f0, 0xc004753100)
net/http.HandlerFunc.ServeHTTP(0xc010413d00, 0x7f41414b65b0, 0xc010efa8f0, 0xc004753100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f41414b65b0, 0xc010efa8f0, 0xc004753100)
net/http.HandlerFunc.ServeHTTP(0xc00edba870, 0x7f41414b65b0, 0xc010efa8f0, 0xc004753100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f41414b65b0, 0xc010efa8f0, 0xc004753100)
net/http.HandlerFunc.ServeHTTP(0xc010413d40, 0x7f41414b65b0, 0xc010efa8f0, 0xc004753100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f41414b65b0, 0xc010efa8f0, 0xc004752e00)
net/http.HandlerFunc.ServeHTTP(0xc0104a3f90, 0x7f41414b65b0, 0xc010efa8f0, 0xc004752e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0057a9c20, 0xc00f061d40, 0x74261e0, 0xc010efa8f0, 0xc004752e00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:59254]
I0611 09:03:24.933789  109206 healthz.go:161] healthz check etcd failed: etcd client connection not yet established
I0611 09:03:24.933831  109206 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0611 09:03:24.933845  109206 healthz.go:161] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0611 09:03:24.933855  109206 healthz.go:161] healthz check poststarthook/ca-registration failed: not finished
I0611 09:03:24.933863  109206 healthz.go:175] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0611 09:03:24.934017  109206 wrap.go:47] GET /healthz: (409.89µs) 500
goroutine 26888 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00b20f570, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00b20f570, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00d31c640, 0x1f4)
net/http.Error(0x7f41414b65b0, 0xc0027e1050, 0xc003444480, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f41414b65b0, 0xc0027e1050, 0xc006332900)
net/http.HandlerFunc.ServeHTTP(0xc00d3315a0, 0x7f41414b65b0, 0xc0027e1050, 0xc006332900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc002e57ec0, 0x7f41414b65b0, 0xc0027e1050, 0xc006332900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00edee150, 0x7f41414b65b0, 0xc0027e1050, 0xc006332900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43c0f1b, 0xe, 0xc010a6a630, 0xc00edee150, 0x7f41414b65b0, 0xc0027e1050, 0xc006332900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f41414b65b0, 0xc0027e1050, 0xc006332900)
net/http.HandlerFunc.ServeHTTP(0xc010413d00, 0x7f41414b65b0, 0xc0027e1050, 0xc006332900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f41414b65b0, 0xc0027e1050, 0xc006332900)
net/http.HandlerFunc.ServeHTTP(0xc00edba870, 0x7f41414b65b0, 0xc0027e1050, 0xc006332900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f41414b65b0, 0xc0027e1050, 0xc006332900)
net/http.HandlerFunc.ServeHTTP(0xc010413d40, 0x7f41414b65b0, 0xc0027e1050, 0xc006332900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f41414b65b0, 0xc0027e1050, 0xc006332800)
net/http.HandlerFunc.ServeHTTP(0xc0104a3f90, 0x7f41414b65b0, 0xc0027e1050, 0xc006332800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc003425bc0, 0xc00f061d40, 0x74261e0, 0xc0027e1050, 0xc006332800)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59254]
I0611 09:03:25.021808  109206 healthz.go:161] healthz check etcd failed: etcd client connection not yet established
I0611 09:03:25.021917  109206 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0611 09:03:25.021943  109206 healthz.go:161] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0611 09:03:25.021962  109206 healthz.go:161] healthz check poststarthook/ca-registration failed: not finished
I0611 09:03:25.022009  109206 healthz.go:175] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0611 09:03:25.022219  109206 wrap.go:47] GET /healthz: (670.797µs) 500
goroutine 26890 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00b20f6c0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00b20f6c0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00d31c6e0, 0x1f4)
net/http.Error(0x7f41414b65b0, 0xc0027e1058, 0xc003444a80, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f41414b65b0, 0xc0027e1058, 0xc006332d00)
net/http.HandlerFunc.ServeHTTP(0xc00d3315a0, 0x7f41414b65b0, 0xc0027e1058, 0xc006332d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc002e57ec0, 0x7f41414b65b0, 0xc0027e1058, 0xc006332d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00edee150, 0x7f41414b65b0, 0xc0027e1058, 0xc006332d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43c0f1b, 0xe, 0xc010a6a630, 0xc00edee150, 0x7f41414b65b0, 0xc0027e1058, 0xc006332d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f41414b65b0, 0xc0027e1058, 0xc006332d00)
net/http.HandlerFunc.ServeHTTP(0xc010413d00, 0x7f41414b65b0, 0xc0027e1058, 0xc006332d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f41414b65b0, 0xc0027e1058, 0xc006332d00)
net/http.HandlerFunc.ServeHTTP(0xc00edba870, 0x7f41414b65b0, 0xc0027e1058, 0xc006332d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f41414b65b0, 0xc0027e1058, 0xc006332d00)
net/http.HandlerFunc.ServeHTTP(0xc010413d40, 0x7f41414b65b0, 0xc0027e1058, 0xc006332d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f41414b65b0, 0xc0027e1058, 0xc006332c00)
net/http.HandlerFunc.ServeHTTP(0xc0104a3f90, 0x7f41414b65b0, 0xc0027e1058, 0xc006332c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc003425e60, 0xc00f061d40, 0x74261e0, 0xc0027e1058, 0xc006332c00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:59254]
I0611 09:03:25.033745  109206 healthz.go:161] healthz check etcd failed: etcd client connection not yet established
I0611 09:03:25.033863  109206 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0611 09:03:25.033892  109206 healthz.go:161] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0611 09:03:25.033928  109206 healthz.go:161] healthz check poststarthook/ca-registration failed: not finished
I0611 09:03:25.033960  109206 healthz.go:175] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0611 09:03:25.034202  109206 wrap.go:47] GET /healthz: (599.395µs) 500
goroutine 26892 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00b20f810, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00b20f810, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00d31c780, 0x1f4)
net/http.Error(0x7f41414b65b0, 0xc0027e1060, 0xc003445080, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f41414b65b0, 0xc0027e1060, 0xc006333100)
net/http.HandlerFunc.ServeHTTP(0xc00d3315a0, 0x7f41414b65b0, 0xc0027e1060, 0xc006333100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc002e57ec0, 0x7f41414b65b0, 0xc0027e1060, 0xc006333100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00edee150, 0x7f41414b65b0, 0xc0027e1060, 0xc006333100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43c0f1b, 0xe, 0xc010a6a630, 0xc00edee150, 0x7f41414b65b0, 0xc0027e1060, 0xc006333100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f41414b65b0, 0xc0027e1060, 0xc006333100)
net/http.HandlerFunc.ServeHTTP(0xc010413d00, 0x7f41414b65b0, 0xc0027e1060, 0xc006333100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f41414b65b0, 0xc0027e1060, 0xc006333100)
net/http.HandlerFunc.ServeHTTP(0xc00edba870, 0x7f41414b65b0, 0xc0027e1060, 0xc006333100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f41414b65b0, 0xc0027e1060, 0xc006333100)
net/http.HandlerFunc.ServeHTTP(0xc010413d40, 0x7f41414b65b0, 0xc0027e1060, 0xc006333100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f41414b65b0, 0xc0027e1060, 0xc006333000)
net/http.HandlerFunc.ServeHTTP(0xc0104a3f90, 0x7f41414b65b0, 0xc0027e1060, 0xc006333000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc001ec20c0, 0xc00f061d40, 0x74261e0, 0xc0027e1060, 0xc006333000)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59254]
I0611 09:03:25.138218  109206 healthz.go:161] healthz check etcd failed: etcd client connection not yet established
I0611 09:03:25.138262  109206 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0611 09:03:25.138274  109206 healthz.go:161] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0611 09:03:25.138282  109206 healthz.go:161] healthz check poststarthook/ca-registration failed: not finished
I0611 09:03:25.138290  109206 healthz.go:175] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0611 09:03:25.138484  109206 wrap.go:47] GET /healthz: (416.112µs) 500
goroutine 26894 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00b20f960, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00b20f960, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00d31c960, 0x1f4)
net/http.Error(0x7f41414b65b0, 0xc0027e1088, 0xc003445980, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f41414b65b0, 0xc0027e1088, 0xc006333700)
net/http.HandlerFunc.ServeHTTP(0xc00d3315a0, 0x7f41414b65b0, 0xc0027e1088, 0xc006333700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc002e57ec0, 0x7f41414b65b0, 0xc0027e1088, 0xc006333700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00edee150, 0x7f41414b65b0, 0xc0027e1088, 0xc006333700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43c0f1b, 0xe, 0xc010a6a630, 0xc00edee150, 0x7f41414b65b0, 0xc0027e1088, 0xc006333700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f41414b65b0, 0xc0027e1088, 0xc006333700)
net/http.HandlerFunc.ServeHTTP(0xc010413d00, 0x7f41414b65b0, 0xc0027e1088, 0xc006333700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f41414b65b0, 0xc0027e1088, 0xc006333700)
net/http.HandlerFunc.ServeHTTP(0xc00edba870, 0x7f41414b65b0, 0xc0027e1088, 0xc006333700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f41414b65b0, 0xc0027e1088, 0xc006333700)
net/http.HandlerFunc.ServeHTTP(0xc010413d40, 0x7f41414b65b0, 0xc0027e1088, 0xc006333700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f41414b65b0, 0xc0027e1088, 0xc006333600)
net/http.HandlerFunc.ServeHTTP(0xc0104a3f90, 0x7f41414b65b0, 0xc0027e1088, 0xc006333600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc001ec2b40, 0xc00f061d40, 0x74261e0, 0xc0027e1088, 0xc006333600)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:59258]
I0611 09:03:25.138754  109206 healthz.go:161] healthz check etcd failed: etcd client connection not yet established
I0611 09:03:25.138788  109206 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0611 09:03:25.138803  109206 healthz.go:161] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0611 09:03:25.138818  109206 healthz.go:161] healthz check poststarthook/ca-registration failed: not finished
I0611 09:03:25.138832  109206 healthz.go:175] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0611 09:03:25.138952  109206 wrap.go:47] GET /healthz: (286.492µs) 500
goroutine 26896 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00b20fab0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00b20fab0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00d31ca00, 0x1f4)
net/http.Error(0x7f41414b65b0, 0xc0027e1090, 0xc00b3c4000, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f41414b65b0, 0xc0027e1090, 0xc004180400)
net/http.HandlerFunc.ServeHTTP(0xc00d3315a0, 0x7f41414b65b0, 0xc0027e1090, 0xc004180400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc002e57ec0, 0x7f41414b65b0, 0xc0027e1090, 0xc004180400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00edee150, 0x7f41414b65b0, 0xc0027e1090, 0xc004180400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43c0f1b, 0xe, 0xc010a6a630, 0xc00edee150, 0x7f41414b65b0, 0xc0027e1090, 0xc004180400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f41414b65b0, 0xc0027e1090, 0xc004180400)
net/http.HandlerFunc.ServeHTTP(0xc010413d00, 0x7f41414b65b0, 0xc0027e1090, 0xc004180400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f41414b65b0, 0xc0027e1090, 0xc004180400)
net/http.HandlerFunc.ServeHTTP(0xc00edba870, 0x7f41414b65b0, 0xc0027e1090, 0xc004180400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f41414b65b0, 0xc0027e1090, 0xc004180400)
net/http.HandlerFunc.ServeHTTP(0xc010413d40, 0x7f41414b65b0, 0xc0027e1090, 0xc004180400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f41414b65b0, 0xc0027e1090, 0xc004180200)
net/http.HandlerFunc.ServeHTTP(0xc0104a3f90, 0x7f41414b65b0, 0xc0027e1090, 0xc004180200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc001ec2d80, 0xc00f061d40, 0x74261e0, 0xc0027e1090, 0xc004180200)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59254]
I0611 09:03:25.221668  109206 healthz.go:161] healthz check etcd failed: etcd client connection not yet established
I0611 09:03:25.221733  109206 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0611 09:03:25.221747  109206 healthz.go:161] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0611 09:03:25.221756  109206 healthz.go:161] healthz check poststarthook/ca-registration failed: not finished
I0611 09:03:25.221764  109206 healthz.go:175] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0611 09:03:25.221926  109206 wrap.go:47] GET /healthz: (475.737µs) 500
goroutine 26875 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00660bea0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00660bea0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00d23ec40, 0x1f4)
net/http.Error(0x7f41414b65b0, 0xc00d1fa828, 0xc002fa8900, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f41414b65b0, 0xc00d1fa828, 0xc00439e600)
net/http.HandlerFunc.ServeHTTP(0xc00d3315a0, 0x7f41414b65b0, 0xc00d1fa828, 0xc00439e600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc002e57ec0, 0x7f41414b65b0, 0xc00d1fa828, 0xc00439e600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00edee150, 0x7f41414b65b0, 0xc00d1fa828, 0xc00439e600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43c0f1b, 0xe, 0xc010a6a630, 0xc00edee150, 0x7f41414b65b0, 0xc00d1fa828, 0xc00439e600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f41414b65b0, 0xc00d1fa828, 0xc00439e600)
net/http.HandlerFunc.ServeHTTP(0xc010413d00, 0x7f41414b65b0, 0xc00d1fa828, 0xc00439e600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f41414b65b0, 0xc00d1fa828, 0xc00439e600)
net/http.HandlerFunc.ServeHTTP(0xc00edba870, 0x7f41414b65b0, 0xc00d1fa828, 0xc00439e600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f41414b65b0, 0xc00d1fa828, 0xc00439e600)
net/http.HandlerFunc.ServeHTTP(0xc010413d40, 0x7f41414b65b0, 0xc00d1fa828, 0xc00439e600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f41414b65b0, 0xc00d1fa828, 0xc00439e200)
net/http.HandlerFunc.ServeHTTP(0xc0104a3f90, 0x7f41414b65b0, 0xc00d1fa828, 0xc00439e200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc001f98fc0, 0xc00f061d40, 0x74261e0, 0xc00d1fa828, 0xc00439e200)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:59258]
I0611 09:03:25.233716  109206 healthz.go:161] healthz check etcd failed: etcd client connection not yet established
I0611 09:03:25.233758  109206 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0611 09:03:25.233770  109206 healthz.go:161] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0611 09:03:25.233780  109206 healthz.go:161] healthz check poststarthook/ca-registration failed: not finished
I0611 09:03:25.233788  109206 healthz.go:175] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0611 09:03:25.234006  109206 wrap.go:47] GET /healthz: (431.233µs) 500
goroutine 26877 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00b22e070, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00b22e070, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00d23ed00, 0x1f4)
net/http.Error(0x7f41414b65b0, 0xc00d1fa830, 0xc002fa8f00, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f41414b65b0, 0xc00d1fa830, 0xc00439ee00)
net/http.HandlerFunc.ServeHTTP(0xc00d3315a0, 0x7f41414b65b0, 0xc00d1fa830, 0xc00439ee00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc002e57ec0, 0x7f41414b65b0, 0xc00d1fa830, 0xc00439ee00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00edee150, 0x7f41414b65b0, 0xc00d1fa830, 0xc00439ee00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43c0f1b, 0xe, 0xc010a6a630, 0xc00edee150, 0x7f41414b65b0, 0xc00d1fa830, 0xc00439ee00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f41414b65b0, 0xc00d1fa830, 0xc00439ee00)
net/http.HandlerFunc.ServeHTTP(0xc010413d00, 0x7f41414b65b0, 0xc00d1fa830, 0xc00439ee00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f41414b65b0, 0xc00d1fa830, 0xc00439ee00)
net/http.HandlerFunc.ServeHTTP(0xc00edba870, 0x7f41414b65b0, 0xc00d1fa830, 0xc00439ee00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f41414b65b0, 0xc00d1fa830, 0xc00439ee00)
net/http.HandlerFunc.ServeHTTP(0xc010413d40, 0x7f41414b65b0, 0xc00d1fa830, 0xc00439ee00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f41414b65b0, 0xc00d1fa830, 0xc00439eb00)
net/http.HandlerFunc.ServeHTTP(0xc0104a3f90, 0x7f41414b65b0, 0xc00d1fa830, 0xc00439eb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc001f99560, 0xc00f061d40, 0x74261e0, 0xc00d1fa830, 0xc00439eb00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:25.321724  109206 healthz.go:161] healthz check etcd failed: etcd client connection not yet established
I0611 09:03:25.321821  109206 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0611 09:03:25.321846  109206 healthz.go:161] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0611 09:03:25.321878  109206 healthz.go:161] healthz check poststarthook/ca-registration failed: not finished
I0611 09:03:25.321898  109206 healthz.go:175] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0611 09:03:25.322094  109206 wrap.go:47] GET /healthz: (489.602µs) 500
goroutine 26933 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0042c9880, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0042c9880, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00d22e300, 0x1f4)
net/http.Error(0x7f41414b65b0, 0xc010efa938, 0xc00bac1500, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f41414b65b0, 0xc010efa938, 0xc004753b00)
net/http.HandlerFunc.ServeHTTP(0xc00d3315a0, 0x7f41414b65b0, 0xc010efa938, 0xc004753b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc002e57ec0, 0x7f41414b65b0, 0xc010efa938, 0xc004753b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00edee150, 0x7f41414b65b0, 0xc010efa938, 0xc004753b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43c0f1b, 0xe, 0xc010a6a630, 0xc00edee150, 0x7f41414b65b0, 0xc010efa938, 0xc004753b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f41414b65b0, 0xc010efa938, 0xc004753b00)
net/http.HandlerFunc.ServeHTTP(0xc010413d00, 0x7f41414b65b0, 0xc010efa938, 0xc004753b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f41414b65b0, 0xc010efa938, 0xc004753b00)
net/http.HandlerFunc.ServeHTTP(0xc00edba870, 0x7f41414b65b0, 0xc010efa938, 0xc004753b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f41414b65b0, 0xc010efa938, 0xc004753b00)
net/http.HandlerFunc.ServeHTTP(0xc010413d40, 0x7f41414b65b0, 0xc010efa938, 0xc004753b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f41414b65b0, 0xc010efa938, 0xc004753a00)
net/http.HandlerFunc.ServeHTTP(0xc0104a3f90, 0x7f41414b65b0, 0xc010efa938, 0xc004753a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0057a9f80, 0xc00f061d40, 0x74261e0, 0xc010efa938, 0xc004753a00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:59258]
I0611 09:03:25.333786  109206 healthz.go:161] healthz check etcd failed: etcd client connection not yet established
I0611 09:03:25.333889  109206 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0611 09:03:25.333916  109206 healthz.go:161] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0611 09:03:25.333936  109206 healthz.go:161] healthz check poststarthook/ca-registration failed: not finished
I0611 09:03:25.333986  109206 healthz.go:175] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0611 09:03:25.334192  109206 wrap.go:47] GET /healthz: (639.956µs) 500
goroutine 26912 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00b21ad20, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00b21ad20, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00d22a380, 0x1f4)
net/http.Error(0x7f41414b65b0, 0xc011404c28, 0xc00c963200, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f41414b65b0, 0xc011404c28, 0xc0066c8d00)
net/http.HandlerFunc.ServeHTTP(0xc00d3315a0, 0x7f41414b65b0, 0xc011404c28, 0xc0066c8d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc002e57ec0, 0x7f41414b65b0, 0xc011404c28, 0xc0066c8d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00edee150, 0x7f41414b65b0, 0xc011404c28, 0xc0066c8d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43c0f1b, 0xe, 0xc010a6a630, 0xc00edee150, 0x7f41414b65b0, 0xc011404c28, 0xc0066c8d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f41414b65b0, 0xc011404c28, 0xc0066c8d00)
net/http.HandlerFunc.ServeHTTP(0xc010413d00, 0x7f41414b65b0, 0xc011404c28, 0xc0066c8d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f41414b65b0, 0xc011404c28, 0xc0066c8d00)
net/http.HandlerFunc.ServeHTTP(0xc00edba870, 0x7f41414b65b0, 0xc011404c28, 0xc0066c8d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f41414b65b0, 0xc011404c28, 0xc0066c8d00)
net/http.HandlerFunc.ServeHTTP(0xc010413d40, 0x7f41414b65b0, 0xc011404c28, 0xc0066c8d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f41414b65b0, 0xc011404c28, 0xc0066c8c00)
net/http.HandlerFunc.ServeHTTP(0xc0104a3f90, 0x7f41414b65b0, 0xc011404c28, 0xc0066c8c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0032a4360, 0xc00f061d40, 0x74261e0, 0xc011404c28, 0xc0066c8c00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:25.421589  109206 healthz.go:161] healthz check etcd failed: etcd client connection not yet established
I0611 09:03:25.421633  109206 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0611 09:03:25.421645  109206 healthz.go:161] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0611 09:03:25.421654  109206 healthz.go:161] healthz check poststarthook/ca-registration failed: not finished
I0611 09:03:25.421661  109206 healthz.go:175] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0611 09:03:25.421888  109206 wrap.go:47] GET /healthz: (437.532µs) 500
goroutine 26879 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00b22e1c0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00b22e1c0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00d23f1a0, 0x1f4)
net/http.Error(0x7f41414b65b0, 0xc00d1fa878, 0xc002fa9800, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f41414b65b0, 0xc00d1fa878, 0xc00439fb00)
net/http.HandlerFunc.ServeHTTP(0xc00d3315a0, 0x7f41414b65b0, 0xc00d1fa878, 0xc00439fb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc002e57ec0, 0x7f41414b65b0, 0xc00d1fa878, 0xc00439fb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00edee150, 0x7f41414b65b0, 0xc00d1fa878, 0xc00439fb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43c0f1b, 0xe, 0xc010a6a630, 0xc00edee150, 0x7f41414b65b0, 0xc00d1fa878, 0xc00439fb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f41414b65b0, 0xc00d1fa878, 0xc00439fb00)
net/http.HandlerFunc.ServeHTTP(0xc010413d00, 0x7f41414b65b0, 0xc00d1fa878, 0xc00439fb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f41414b65b0, 0xc00d1fa878, 0xc00439fb00)
net/http.HandlerFunc.ServeHTTP(0xc00edba870, 0x7f41414b65b0, 0xc00d1fa878, 0xc00439fb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f41414b65b0, 0xc00d1fa878, 0xc00439fb00)
net/http.HandlerFunc.ServeHTTP(0xc010413d40, 0x7f41414b65b0, 0xc00d1fa878, 0xc00439fb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f41414b65b0, 0xc00d1fa878, 0xc00439f700)
net/http.HandlerFunc.ServeHTTP(0xc0104a3f90, 0x7f41414b65b0, 0xc00d1fa878, 0xc00439f700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc001f99b60, 0xc00f061d40, 0x74261e0, 0xc00d1fa878, 0xc00439f700)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:59258]
I0611 09:03:25.433660  109206 healthz.go:161] healthz check etcd failed: etcd client connection not yet established
I0611 09:03:25.433787  109206 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0611 09:03:25.433828  109206 healthz.go:161] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0611 09:03:25.433850  109206 healthz.go:161] healthz check poststarthook/ca-registration failed: not finished
I0611 09:03:25.433868  109206 healthz.go:175] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0611 09:03:25.434184  109206 wrap.go:47] GET /healthz: (656.789µs) 500
goroutine 26881 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00b22e310, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00b22e310, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00d23f240, 0x1f4)
net/http.Error(0x7f41414b65b0, 0xc00d1fa880, 0xc002fa9e00, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f41414b65b0, 0xc00d1fa880, 0xc00439ff00)
net/http.HandlerFunc.ServeHTTP(0xc00d3315a0, 0x7f41414b65b0, 0xc00d1fa880, 0xc00439ff00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc002e57ec0, 0x7f41414b65b0, 0xc00d1fa880, 0xc00439ff00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00edee150, 0x7f41414b65b0, 0xc00d1fa880, 0xc00439ff00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43c0f1b, 0xe, 0xc010a6a630, 0xc00edee150, 0x7f41414b65b0, 0xc00d1fa880, 0xc00439ff00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f41414b65b0, 0xc00d1fa880, 0xc00439ff00)
net/http.HandlerFunc.ServeHTTP(0xc010413d00, 0x7f41414b65b0, 0xc00d1fa880, 0xc00439ff00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f41414b65b0, 0xc00d1fa880, 0xc00439ff00)
net/http.HandlerFunc.ServeHTTP(0xc00edba870, 0x7f41414b65b0, 0xc00d1fa880, 0xc00439ff00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f41414b65b0, 0xc00d1fa880, 0xc00439ff00)
net/http.HandlerFunc.ServeHTTP(0xc010413d40, 0x7f41414b65b0, 0xc00d1fa880, 0xc00439ff00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f41414b65b0, 0xc00d1fa880, 0xc00439fe00)
net/http.HandlerFunc.ServeHTTP(0xc0104a3f90, 0x7f41414b65b0, 0xc00d1fa880, 0xc00439fe00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc001f99ce0, 0xc00f061d40, 0x74261e0, 0xc00d1fa880, 0xc00439fe00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:25.521670  109206 healthz.go:161] healthz check etcd failed: etcd client connection not yet established
I0611 09:03:25.521712  109206 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0611 09:03:25.521725  109206 healthz.go:161] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0611 09:03:25.521735  109206 healthz.go:161] healthz check poststarthook/ca-registration failed: not finished
I0611 09:03:25.521743  109206 healthz.go:175] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0611 09:03:25.521920  109206 wrap.go:47] GET /healthz: (378.36µs) 500
goroutine 26946 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00b20fc00, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00b20fc00, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00d31ce00, 0x1f4)
net/http.Error(0x7f41414b65b0, 0xc0027e10c0, 0xc00b3c4900, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f41414b65b0, 0xc0027e10c0, 0xc004180c00)
net/http.HandlerFunc.ServeHTTP(0xc00d3315a0, 0x7f41414b65b0, 0xc0027e10c0, 0xc004180c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc002e57ec0, 0x7f41414b65b0, 0xc0027e10c0, 0xc004180c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00edee150, 0x7f41414b65b0, 0xc0027e10c0, 0xc004180c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43c0f1b, 0xe, 0xc010a6a630, 0xc00edee150, 0x7f41414b65b0, 0xc0027e10c0, 0xc004180c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f41414b65b0, 0xc0027e10c0, 0xc004180c00)
net/http.HandlerFunc.ServeHTTP(0xc010413d00, 0x7f41414b65b0, 0xc0027e10c0, 0xc004180c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f41414b65b0, 0xc0027e10c0, 0xc004180c00)
net/http.HandlerFunc.ServeHTTP(0xc00edba870, 0x7f41414b65b0, 0xc0027e10c0, 0xc004180c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f41414b65b0, 0xc0027e10c0, 0xc004180c00)
net/http.HandlerFunc.ServeHTTP(0xc010413d40, 0x7f41414b65b0, 0xc0027e10c0, 0xc004180c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f41414b65b0, 0xc0027e10c0, 0xc004180b00)
net/http.HandlerFunc.ServeHTTP(0xc0104a3f90, 0x7f41414b65b0, 0xc0027e10c0, 0xc004180b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc001ec3140, 0xc00f061d40, 0x74261e0, 0xc0027e10c0, 0xc004180b00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:59258]
I0611 09:03:25.533740  109206 healthz.go:161] healthz check etcd failed: etcd client connection not yet established
I0611 09:03:25.533779  109206 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0611 09:03:25.533792  109206 healthz.go:161] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0611 09:03:25.533801  109206 healthz.go:161] healthz check poststarthook/ca-registration failed: not finished
I0611 09:03:25.533809  109206 healthz.go:175] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0611 09:03:25.533944  109206 wrap.go:47] GET /healthz: (352.787µs) 500
goroutine 26948 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00b20fd50, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00b20fd50, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00d31cf20, 0x1f4)
net/http.Error(0x7f41414b65b0, 0xc0027e10e8, 0xc00b3c4f00, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f41414b65b0, 0xc0027e10e8, 0xc004181600)
net/http.HandlerFunc.ServeHTTP(0xc00d3315a0, 0x7f41414b65b0, 0xc0027e10e8, 0xc004181600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc002e57ec0, 0x7f41414b65b0, 0xc0027e10e8, 0xc004181600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00edee150, 0x7f41414b65b0, 0xc0027e10e8, 0xc004181600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43c0f1b, 0xe, 0xc010a6a630, 0xc00edee150, 0x7f41414b65b0, 0xc0027e10e8, 0xc004181600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f41414b65b0, 0xc0027e10e8, 0xc004181600)
net/http.HandlerFunc.ServeHTTP(0xc010413d00, 0x7f41414b65b0, 0xc0027e10e8, 0xc004181600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f41414b65b0, 0xc0027e10e8, 0xc004181600)
net/http.HandlerFunc.ServeHTTP(0xc00edba870, 0x7f41414b65b0, 0xc0027e10e8, 0xc004181600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f41414b65b0, 0xc0027e10e8, 0xc004181600)
net/http.HandlerFunc.ServeHTTP(0xc010413d40, 0x7f41414b65b0, 0xc0027e10e8, 0xc004181600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f41414b65b0, 0xc0027e10e8, 0xc004181200)
net/http.HandlerFunc.ServeHTTP(0xc0104a3f90, 0x7f41414b65b0, 0xc0027e10e8, 0xc004181200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc001ec32c0, 0xc00f061d40, 0x74261e0, 0xc0027e10e8, 0xc004181200)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:25.621680  109206 healthz.go:161] healthz check etcd failed: etcd client connection not yet established
I0611 09:03:25.621714  109206 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0611 09:03:25.621726  109206 healthz.go:161] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0611 09:03:25.621735  109206 healthz.go:161] healthz check poststarthook/ca-registration failed: not finished
I0611 09:03:25.621743  109206 healthz.go:175] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0611 09:03:25.621883  109206 wrap.go:47] GET /healthz: (356.047µs) 500
goroutine 26935 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0042c99d0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0042c99d0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00d22e6a0, 0x1f4)
net/http.Error(0x7f41414b65b0, 0xc010efa940, 0xc00b462180, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f41414b65b0, 0xc010efa940, 0xc004826200)
net/http.HandlerFunc.ServeHTTP(0xc00d3315a0, 0x7f41414b65b0, 0xc010efa940, 0xc004826200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc002e57ec0, 0x7f41414b65b0, 0xc010efa940, 0xc004826200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00edee150, 0x7f41414b65b0, 0xc010efa940, 0xc004826200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43c0f1b, 0xe, 0xc010a6a630, 0xc00edee150, 0x7f41414b65b0, 0xc010efa940, 0xc004826200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f41414b65b0, 0xc010efa940, 0xc004826200)
net/http.HandlerFunc.ServeHTTP(0xc010413d00, 0x7f41414b65b0, 0xc010efa940, 0xc004826200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f41414b65b0, 0xc010efa940, 0xc004826200)
net/http.HandlerFunc.ServeHTTP(0xc00edba870, 0x7f41414b65b0, 0xc010efa940, 0xc004826200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f41414b65b0, 0xc010efa940, 0xc004826200)
net/http.HandlerFunc.ServeHTTP(0xc010413d40, 0x7f41414b65b0, 0xc010efa940, 0xc004826200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f41414b65b0, 0xc010efa940, 0xc004753f00)
net/http.HandlerFunc.ServeHTTP(0xc0104a3f90, 0x7f41414b65b0, 0xc010efa940, 0xc004753f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc001aead80, 0xc00f061d40, 0x74261e0, 0xc010efa940, 0xc004753f00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:59258]
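The repeated GET /healthz requests above, each answered with 500 while the etcd client and post-start hooks are still initializing, are readiness polling against the test apiserver. A minimal, hypothetical sketch of such a poll loop using only the Go standard library; the URL, interval, and timeout are illustrative assumptions, not values taken from this test:

// healthzpoll is an illustrative sketch (not the test's actual code):
// poll GET /healthz until it returns 200 OK or a deadline expires.
package main

import (
	"fmt"
	"io/ioutil"
	"net/http"
	"time"
)

func waitForHealthz(baseURL string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := http.Get(baseURL + "/healthz")
		if err == nil {
			body, _ := ioutil.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // every check reported "[+] ... ok"
			}
			// On 500 the body lists each failing check,
			// e.g. "[-]etcd failed: reason withheld".
			fmt.Printf("healthz not ready (%d):\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(100 * time.Millisecond) // assumed poll interval
	}
	return fmt.Errorf("healthz did not become ready within %s", timeout)
}

func main() {
	// Hypothetical local address; the test apiserver listens on an ephemeral port.
	if err := waitForHealthz("http://127.0.0.1:8080", 30*time.Second); err != nil {
		fmt.Println(err)
	}
}
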
I0611 09:03:25.633436  109206 client.go:354] parsed scheme: ""
I0611 09:03:25.633470  109206 client.go:354] scheme "" not registered, fallback to default scheme
I0611 09:03:25.633517  109206 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0611 09:03:25.633576  109206 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0611 09:03:25.633651  109206 healthz.go:161] healthz check etcd failed: etcd client connection not yet established
I0611 09:03:25.633676  109206 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0611 09:03:25.633689  109206 healthz.go:161] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0611 09:03:25.633698  109206 healthz.go:161] healthz check poststarthook/ca-registration failed: not finished
I0611 09:03:25.633707  109206 healthz.go:175] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0611 09:03:25.633915  109206 wrap.go:47] GET /healthz: (407.941µs) 500
goroutine 26922 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc002b23c70, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc002b23c70, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00d2368e0, 0x1f4)
net/http.Error(0x7f41414b65b0, 0xc00e3911e0, 0xc00baf8d80, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f41414b65b0, 0xc00e3911e0, 0xc006601500)
net/http.HandlerFunc.ServeHTTP(0xc00d3315a0, 0x7f41414b65b0, 0xc00e3911e0, 0xc006601500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc002e57ec0, 0x7f41414b65b0, 0xc00e3911e0, 0xc006601500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00edee150, 0x7f41414b65b0, 0xc00e3911e0, 0xc006601500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43c0f1b, 0xe, 0xc010a6a630, 0xc00edee150, 0x7f41414b65b0, 0xc00e3911e0, 0xc006601500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f41414b65b0, 0xc00e3911e0, 0xc006601500)
net/http.HandlerFunc.ServeHTTP(0xc010413d00, 0x7f41414b65b0, 0xc00e3911e0, 0xc006601500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f41414b65b0, 0xc00e3911e0, 0xc006601500)
net/http.HandlerFunc.ServeHTTP(0xc00edba870, 0x7f41414b65b0, 0xc00e3911e0, 0xc006601500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f41414b65b0, 0xc00e3911e0, 0xc006601500)
net/http.HandlerFunc.ServeHTTP(0xc010413d40, 0x7f41414b65b0, 0xc00e3911e0, 0xc006601500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f41414b65b0, 0xc00e3911e0, 0xc006601400)
net/http.HandlerFunc.ServeHTTP(0xc0104a3f90, 0x7f41414b65b0, 0xc00e3911e0, 0xc006601400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0020bc2a0, 0xc00f061d40, 0x74261e0, 0xc00e3911e0, 0xc006601400)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:25.633978  109206 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0611 09:03:25.634026  109206 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
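The client.go and balancer lines above show the apiserver's etcd clientv3 balancer resolving and pinning 127.0.0.1:2379, after which the "[-]etcd failed" healthz entry clears. For reference, a standalone etcd v3 client against the same endpoint would be opened roughly as below; the import path and timeouts are assumptions (they differ between etcd releases), not taken from this build:

// etcdclient is an illustrative sketch of opening an etcd v3 client
// against the endpoint pinned in the log above.
package main

import (
	"context"
	"fmt"
	"time"

	"go.etcd.io/etcd/clientv3" // older releases vendor this as github.com/coreos/etcd/clientv3
)

func main() {
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"127.0.0.1:2379"},
		DialTimeout: 5 * time.Second, // assumed timeout
	})
	if err != nil {
		fmt.Println("connect:", err)
		return
	}
	defer cli.Close()

	// A trivial read to confirm the connection is usable.
	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()
	if _, err := cli.Get(ctx, "health-probe"); err != nil {
		fmt.Println("get:", err)
	}
}
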
I0611 09:03:25.722671  109206 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0611 09:03:25.722702  109206 healthz.go:161] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0611 09:03:25.722713  109206 healthz.go:161] healthz check poststarthook/ca-registration failed: not finished
I0611 09:03:25.722720  109206 healthz.go:175] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0611 09:03:25.722871  109206 wrap.go:47] GET /healthz: (1.435831ms) 500
goroutine 26924 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc002b23dc0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc002b23dc0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00d236a80, 0x1f4)
net/http.Error(0x7f41414b65b0, 0xc00e391208, 0xc004040c60, 0x160, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f41414b65b0, 0xc00e391208, 0xc006601c00)
net/http.HandlerFunc.ServeHTTP(0xc00d3315a0, 0x7f41414b65b0, 0xc00e391208, 0xc006601c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc002e57ec0, 0x7f41414b65b0, 0xc00e391208, 0xc006601c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00edee150, 0x7f41414b65b0, 0xc00e391208, 0xc006601c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43c0f1b, 0xe, 0xc010a6a630, 0xc00edee150, 0x7f41414b65b0, 0xc00e391208, 0xc006601c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f41414b65b0, 0xc00e391208, 0xc006601c00)
net/http.HandlerFunc.ServeHTTP(0xc010413d00, 0x7f41414b65b0, 0xc00e391208, 0xc006601c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f41414b65b0, 0xc00e391208, 0xc006601c00)
net/http.HandlerFunc.ServeHTTP(0xc00edba870, 0x7f41414b65b0, 0xc00e391208, 0xc006601c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f41414b65b0, 0xc00e391208, 0xc006601c00)
net/http.HandlerFunc.ServeHTTP(0xc010413d40, 0x7f41414b65b0, 0xc00e391208, 0xc006601c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f41414b65b0, 0xc00e391208, 0xc006601b00)
net/http.HandlerFunc.ServeHTTP(0xc0104a3f90, 0x7f41414b65b0, 0xc00e391208, 0xc006601b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0020bc9c0, 0xc00f061d40, 0x74261e0, 0xc00e391208, 0xc006601b00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:59258]
I0611 09:03:25.734681  109206 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0611 09:03:25.734717  109206 healthz.go:161] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0611 09:03:25.734728  109206 healthz.go:161] healthz check poststarthook/ca-registration failed: not finished
I0611 09:03:25.734736  109206 healthz.go:175] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0611 09:03:25.734909  109206 wrap.go:47] GET /healthz: (1.40355ms) 500
goroutine 26926 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00b234000, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00b234000, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00d236ee0, 0x1f4)
net/http.Error(0x7f41414b65b0, 0xc00e391228, 0xc0031ba420, 0x160, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f41414b65b0, 0xc00e391228, 0xc0040b0900)
net/http.HandlerFunc.ServeHTTP(0xc00d3315a0, 0x7f41414b65b0, 0xc00e391228, 0xc0040b0900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc002e57ec0, 0x7f41414b65b0, 0xc00e391228, 0xc0040b0900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00edee150, 0x7f41414b65b0, 0xc00e391228, 0xc0040b0900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43c0f1b, 0xe, 0xc010a6a630, 0xc00edee150, 0x7f41414b65b0, 0xc00e391228, 0xc0040b0900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f41414b65b0, 0xc00e391228, 0xc0040b0900)
net/http.HandlerFunc.ServeHTTP(0xc010413d00, 0x7f41414b65b0, 0xc00e391228, 0xc0040b0900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f41414b65b0, 0xc00e391228, 0xc0040b0900)
net/http.HandlerFunc.ServeHTTP(0xc00edba870, 0x7f41414b65b0, 0xc00e391228, 0xc0040b0900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f41414b65b0, 0xc00e391228, 0xc0040b0900)
net/http.HandlerFunc.ServeHTTP(0xc010413d40, 0x7f41414b65b0, 0xc00e391228, 0xc0040b0900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f41414b65b0, 0xc00e391228, 0xc0040b0500)
net/http.HandlerFunc.ServeHTTP(0xc0104a3f90, 0x7f41414b65b0, 0xc00e391228, 0xc0040b0500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0020bcd20, 0xc00f061d40, 0x74261e0, 0xc00e391228, 0xc0040b0500)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:25.823337  109206 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0611 09:03:25.823381  109206 healthz.go:161] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0611 09:03:25.823393  109206 healthz.go:161] healthz check poststarthook/ca-registration failed: not finished
I0611 09:03:25.823401  109206 healthz.go:175] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0611 09:03:25.823553  109206 wrap.go:47] GET /healthz: (1.770508ms) 500
goroutine 26994 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc005f2d420, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc005f2d420, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00d2c2c80, 0x1f4)
net/http.Error(0x7f41414b65b0, 0xc00d5ca828, 0xc000558dc0, 0x160, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f41414b65b0, 0xc00d5ca828, 0xc00766e300)
net/http.HandlerFunc.ServeHTTP(0xc00d3315a0, 0x7f41414b65b0, 0xc00d5ca828, 0xc00766e300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc002e57ec0, 0x7f41414b65b0, 0xc00d5ca828, 0xc00766e300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00edee150, 0x7f41414b65b0, 0xc00d5ca828, 0xc00766e300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43c0f1b, 0xe, 0xc010a6a630, 0xc00edee150, 0x7f41414b65b0, 0xc00d5ca828, 0xc00766e300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f41414b65b0, 0xc00d5ca828, 0xc00766e300)
net/http.HandlerFunc.ServeHTTP(0xc010413d00, 0x7f41414b65b0, 0xc00d5ca828, 0xc00766e300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f41414b65b0, 0xc00d5ca828, 0xc00766e300)
net/http.HandlerFunc.ServeHTTP(0xc00edba870, 0x7f41414b65b0, 0xc00d5ca828, 0xc00766e300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f41414b65b0, 0xc00d5ca828, 0xc00766e300)
net/http.HandlerFunc.ServeHTTP(0xc010413d40, 0x7f41414b65b0, 0xc00d5ca828, 0xc00766e300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f41414b65b0, 0xc00d5ca828, 0xc00766e200)
net/http.HandlerFunc.ServeHTTP(0xc0104a3f90, 0x7f41414b65b0, 0xc00d5ca828, 0xc00766e200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc005aab7a0, 0xc00f061d40, 0x74261e0, 0xc00d5ca828, 0xc00766e200)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:59266]
I0611 09:03:25.823827  109206 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.893856ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59254]
I0611 09:03:25.824105  109206 wrap.go:47] GET /api/v1/namespaces/kube-system: (3.129965ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:25.824278  109206 wrap.go:47] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-node-critical: (2.875996ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59264]
I0611 09:03:25.828270  109206 wrap.go:47] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (2.550791ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59264]
I0611 09:03:25.828657  109206 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.601367ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59254]
I0611 09:03:25.828829  109206 wrap.go:47] GET /api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: (3.447367ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:25.829324  109206 storage_scheduling.go:119] created PriorityClass system-node-critical with value 2000001000
I0611 09:03:25.831897  109206 wrap.go:47] POST /api/v1/namespaces/kube-system/configmaps: (1.978844ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:25.832087  109206 wrap.go:47] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-cluster-critical: (2.641156ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59264]
I0611 09:03:25.832233  109206 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (2.511522ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59254]
I0611 09:03:25.834834  109206 wrap.go:47] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (2.039323ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59264]
I0611 09:03:25.835029  109206 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (1.826254ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59254]
I0611 09:03:25.835264  109206 storage_scheduling.go:119] created PriorityClass system-cluster-critical with value 2000000000
I0611 09:03:25.835281  109206 storage_scheduling.go:128] all system priority classes are created successfully or already exist.
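The GET/404 followed by POST/201 pairs above are the bootstrap get-or-create pattern: look the object up and create it only when the lookup returns NotFound, which is how the system PriorityClasses (and, below, the default ClusterRoles) come into existence. A rough sketch of that pattern against the exact paths shown in the log, using only the standard library; the server address, missing authentication, and the trimmed manifest body are assumptions for illustration, not the test's real configuration:

// getOrCreate is an illustrative sketch of the GET-then-POST bootstrap
// pattern visible in the log (e.g. for PriorityClasses and ClusterRoles).
package main

import (
	"bytes"
	"fmt"
	"net/http"
)

// getOrCreate issues GET on path; if that returns 404 it POSTs manifest to createPath.
func getOrCreate(base, path, createPath string, manifest []byte) error {
	resp, err := http.Get(base + path)
	if err != nil {
		return err
	}
	resp.Body.Close()
	if resp.StatusCode != http.StatusNotFound {
		return nil // already exists (or some other terminal status)
	}
	post, err := http.Post(base+createPath, "application/json", bytes.NewReader(manifest))
	if err != nil {
		return err
	}
	post.Body.Close()
	if post.StatusCode != http.StatusCreated {
		return fmt.Errorf("create %s: unexpected status %d", createPath, post.StatusCode)
	}
	return nil
}

func main() {
	// Hypothetical unauthenticated local apiserver; value matches the log line
	// "created PriorityClass system-node-critical with value 2000001000".
	_ = getOrCreate("http://127.0.0.1:8080",
		"/apis/scheduling.k8s.io/v1beta1/priorityclasses/system-node-critical",
		"/apis/scheduling.k8s.io/v1beta1/priorityclasses",
		[]byte(`{"apiVersion":"scheduling.k8s.io/v1beta1","kind":"PriorityClass","metadata":{"name":"system-node-critical"},"value":2000001000}`))
}
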
I0611 09:03:25.836123  109206 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0611 09:03:25.836149  109206 healthz.go:175] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0611 09:03:25.836334  109206 wrap.go:47] GET /healthz: (1.403164ms) 500
goroutine 26984 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00b21b180, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00b21b180, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00d22aca0, 0x1f4)
net/http.Error(0x7f41414b65b0, 0xc011404c90, 0xc003d30780, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f41414b65b0, 0xc011404c90, 0xc0066c9c00)
net/http.HandlerFunc.ServeHTTP(0xc00d3315a0, 0x7f41414b65b0, 0xc011404c90, 0xc0066c9c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc002e57ec0, 0x7f41414b65b0, 0xc011404c90, 0xc0066c9c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00edee150, 0x7f41414b65b0, 0xc011404c90, 0xc0066c9c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43c0f1b, 0xe, 0xc010a6a630, 0xc00edee150, 0x7f41414b65b0, 0xc011404c90, 0xc0066c9c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f41414b65b0, 0xc011404c90, 0xc0066c9c00)
net/http.HandlerFunc.ServeHTTP(0xc010413d00, 0x7f41414b65b0, 0xc011404c90, 0xc0066c9c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f41414b65b0, 0xc011404c90, 0xc0066c9c00)
net/http.HandlerFunc.ServeHTTP(0xc00edba870, 0x7f41414b65b0, 0xc011404c90, 0xc0066c9c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f41414b65b0, 0xc011404c90, 0xc0066c9c00)
net/http.HandlerFunc.ServeHTTP(0xc010413d40, 0x7f41414b65b0, 0xc011404c90, 0xc0066c9c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f41414b65b0, 0xc011404c90, 0xc0066c9b00)
net/http.HandlerFunc.ServeHTTP(0xc0104a3f90, 0x7f41414b65b0, 0xc011404c90, 0xc0066c9b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0032a4ba0, 0xc00f061d40, 0x74261e0, 0xc011404c90, 0xc0066c9b00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:25.837111  109206 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (1.038441ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59254]
I0611 09:03:25.838547  109206 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (819.989µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:25.839912  109206 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (769.193µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:25.841119  109206 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (940.689µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:25.842306  109206 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (906.229µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:25.843591  109206 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-admin: (1.006659ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:25.845733  109206 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.835064ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:25.845918  109206 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/cluster-admin
I0611 09:03:25.846862  109206 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:discovery: (758.637µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:25.848655  109206 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.48759ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:25.848841  109206 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:discovery
I0611 09:03:25.849762  109206 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:basic-user: (789.038µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:25.851571  109206 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.507989ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:25.851773  109206 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:basic-user
I0611 09:03:25.852850  109206 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:public-info-viewer: (943.373µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:25.854625  109206 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.464364ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:25.854814  109206 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:public-info-viewer
I0611 09:03:25.855717  109206 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (771.376µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:25.857673  109206 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.513687ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:25.857914  109206 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/admin
I0611 09:03:25.858955  109206 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (832.545µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:25.860821  109206 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.529493ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:25.861040  109206 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/edit
I0611 09:03:25.862075  109206 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (820.489µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:25.864840  109206 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.361774ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:25.865109  109206 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/view
I0611 09:03:25.866457  109206 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (992.419µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:25.868419  109206 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.4621ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:25.868667  109206 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin
I0611 09:03:25.870036  109206 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (1.13856ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:25.872561  109206 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.961684ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:25.872978  109206 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit
I0611 09:03:25.874220  109206 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (960.526µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:25.876806  109206 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.024875ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:25.877254  109206 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view
I0611 09:03:25.878933  109206 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:heapster: (1.183304ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:25.881289  109206 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.859481ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:25.881528  109206 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:heapster
I0611 09:03:25.882824  109206 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node: (1.089169ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:25.885322  109206 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.064405ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:25.885715  109206 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:node
I0611 09:03:25.886799  109206 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-problem-detector: (912.41µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:25.888730  109206 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.605592ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:25.888961  109206 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector
I0611 09:03:25.890183  109206 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-proxier: (972.721µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:25.892236  109206 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.580043ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:25.892450  109206 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:node-proxier
I0611 09:03:25.893414  109206 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kubelet-api-admin: (799.73µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:25.895163  109206 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.439561ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:25.895433  109206 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:kubelet-api-admin
I0611 09:03:25.896594  109206 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-bootstrapper: (938.828µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:25.898456  109206 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.422099ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:25.898819  109206 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:node-bootstrapper
I0611 09:03:25.899980  109206 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:auth-delegator: (907.128µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:25.901960  109206 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.611193ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:25.902181  109206 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:auth-delegator
I0611 09:03:25.903130  109206 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-aggregator: (729.175µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:25.905184  109206 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.639674ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:25.905414  109206 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator
I0611 09:03:25.906737  109206 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-controller-manager: (983.982µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:25.908963  109206 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.737394ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:25.909297  109206 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager
I0611 09:03:25.910336  109206 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-scheduler: (821.488µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:25.912450  109206 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.69325ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:25.912912  109206 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler
I0611 09:03:25.914221  109206 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-dns: (1.131183ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:25.916391  109206 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.716589ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:25.916575  109206 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:kube-dns
I0611 09:03:25.917673  109206 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:persistent-volume-provisioner: (917.908µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:25.920252  109206 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.372324ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:25.920498  109206 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner
I0611 09:03:25.921571  109206 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-attacher: (827.233µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:25.922367  109206 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0611 09:03:25.922397  109206 healthz.go:175] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0611 09:03:25.922587  109206 wrap.go:47] GET /healthz: (1.232609ms) 500
goroutine 27158 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00b2c5180, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00b2c5180, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc009615f00, 0x1f4)
net/http.Error(0x7f41414b65b0, 0xc011107b58, 0xc0062728c0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f41414b65b0, 0xc011107b58, 0xc003f2c900)
net/http.HandlerFunc.ServeHTTP(0xc00d3315a0, 0x7f41414b65b0, 0xc011107b58, 0xc003f2c900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc002e57ec0, 0x7f41414b65b0, 0xc011107b58, 0xc003f2c900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00edee150, 0x7f41414b65b0, 0xc011107b58, 0xc003f2c900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43c0f1b, 0xe, 0xc010a6a630, 0xc00edee150, 0x7f41414b65b0, 0xc011107b58, 0xc003f2c900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f41414b65b0, 0xc011107b58, 0xc003f2c900)
net/http.HandlerFunc.ServeHTTP(0xc010413d00, 0x7f41414b65b0, 0xc011107b58, 0xc003f2c900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f41414b65b0, 0xc011107b58, 0xc003f2c900)
net/http.HandlerFunc.ServeHTTP(0xc00edba870, 0x7f41414b65b0, 0xc011107b58, 0xc003f2c900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f41414b65b0, 0xc011107b58, 0xc003f2c900)
net/http.HandlerFunc.ServeHTTP(0xc010413d40, 0x7f41414b65b0, 0xc011107b58, 0xc003f2c900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f41414b65b0, 0xc011107b58, 0xc003f2c800)
net/http.HandlerFunc.ServeHTTP(0xc0104a3f90, 0x7f41414b65b0, 0xc011107b58, 0xc003f2c800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0042078c0, 0xc00f061d40, 0x74261e0, 0xc011107b58, 0xc003f2c800)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:59266]
I0611 09:03:25.924002  109206 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.738479ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:25.924254  109206 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:csi-external-attacher
I0611 09:03:25.925511  109206 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:nodeclient: (1.029691ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:25.927649  109206 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.745952ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:25.927833  109206 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:nodeclient
I0611 09:03:25.929123  109206 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient: (1.086756ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:25.930963  109206 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.479461ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:25.931262  109206 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
I0611 09:03:25.933760  109206 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:volume-scheduler: (2.223523ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:25.934473  109206 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0611 09:03:25.934497  109206 healthz.go:175] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0611 09:03:25.934648  109206 wrap.go:47] GET /healthz: (1.241534ms) 500
goroutine 27126 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00b2af5e0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00b2af5e0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00ce8bd40, 0x1f4)
net/http.Error(0x7f41414b65b0, 0xc00e391a28, 0xc000077a40, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f41414b65b0, 0xc00e391a28, 0xc003e92800)
net/http.HandlerFunc.ServeHTTP(0xc00d3315a0, 0x7f41414b65b0, 0xc00e391a28, 0xc003e92800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc002e57ec0, 0x7f41414b65b0, 0xc00e391a28, 0xc003e92800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00edee150, 0x7f41414b65b0, 0xc00e391a28, 0xc003e92800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43c0f1b, 0xe, 0xc010a6a630, 0xc00edee150, 0x7f41414b65b0, 0xc00e391a28, 0xc003e92800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f41414b65b0, 0xc00e391a28, 0xc003e92800)
net/http.HandlerFunc.ServeHTTP(0xc010413d00, 0x7f41414b65b0, 0xc00e391a28, 0xc003e92800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f41414b65b0, 0xc00e391a28, 0xc003e92800)
net/http.HandlerFunc.ServeHTTP(0xc00edba870, 0x7f41414b65b0, 0xc00e391a28, 0xc003e92800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f41414b65b0, 0xc00e391a28, 0xc003e92800)
net/http.HandlerFunc.ServeHTTP(0xc010413d40, 0x7f41414b65b0, 0xc00e391a28, 0xc003e92800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f41414b65b0, 0xc00e391a28, 0xc003e92700)
net/http.HandlerFunc.ServeHTTP(0xc0104a3f90, 0x7f41414b65b0, 0xc00e391a28, 0xc003e92700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0041586c0, 0xc00f061d40, 0x74261e0, 0xc00e391a28, 0xc003e92700)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59266]
I0611 09:03:25.935880  109206 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.767504ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:25.936168  109206 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:volume-scheduler
I0611 09:03:25.937300  109206 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-provisioner: (980.826µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:25.939556  109206 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.838053ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:25.939764  109206 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:csi-external-provisioner
I0611 09:03:25.940873  109206 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:attachdetach-controller: (868.751µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:25.942931  109206 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.728958ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:25.943313  109206 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0611 09:03:25.944255  109206 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:clusterrole-aggregation-controller: (738.822µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:25.946003  109206 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.344039ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:25.946242  109206 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0611 09:03:25.947473  109206 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:cronjob-controller: (1.066447ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:25.949523  109206 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.682885ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:25.949773  109206 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0611 09:03:25.950734  109206 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:daemon-set-controller: (794.54µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:25.966423  109206 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (15.331448ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:25.966715  109206 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0611 09:03:25.968230  109206 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:deployment-controller: (1.343992ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:25.970546  109206 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.924418ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:25.970755  109206 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller
I0611 09:03:25.971963  109206 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:disruption-controller: (1.079025ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:25.973919  109206 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.659793ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:25.974100  109206 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller
I0611 09:03:25.975482  109206 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:endpoint-controller: (1.255939ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:25.977415  109206 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.585709ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:25.977609  109206 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0611 09:03:25.980130  109206 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:expand-controller: (2.352822ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:25.982264  109206 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.735403ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:25.982507  109206 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:expand-controller
I0611 09:03:25.983547  109206 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:generic-garbage-collector: (824.965µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:25.987251  109206 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.458782ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:25.987480  109206 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0611 09:03:25.988562  109206 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:horizontal-pod-autoscaler: (867.018µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:25.990646  109206 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.416886ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:25.990861  109206 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0611 09:03:25.991840  109206 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:job-controller: (782.158µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:25.993477  109206 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.179945ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:25.993626  109206 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller
I0611 09:03:25.994521  109206 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:namespace-controller: (747.004µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:25.996016  109206 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.136595ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:25.996480  109206 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:namespace-controller
I0611 09:03:25.997441  109206 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:node-controller: (733.225µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:25.999181  109206 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.367864ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:25.999436  109206 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:node-controller
I0611 09:03:26.000355  109206 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:persistent-volume-binder: (710.805µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:26.002113  109206 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.371373ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:26.002414  109206 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0611 09:03:26.003357  109206 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pod-garbage-collector: (717.848µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:26.005337  109206 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.625865ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:26.005541  109206 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0611 09:03:26.006554  109206 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replicaset-controller: (840.496µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:26.008108  109206 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.313709ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:26.008355  109206 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0611 09:03:26.009411  109206 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replication-controller: (843.901µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:26.011201  109206 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.388055ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:26.012119  109206 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller
I0611 09:03:26.013137  109206 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:resourcequota-controller: (830.909µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:26.015204  109206 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.62992ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:26.015393  109206 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0611 09:03:26.016393  109206 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:route-controller: (810.14µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:26.018414  109206 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.472331ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:26.018576  109206 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:route-controller
I0611 09:03:26.019678  109206 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-account-controller: (829.968µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:26.021310  109206 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.33125ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:26.021657  109206 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:service-account-controller
I0611 09:03:26.022124  109206 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0611 09:03:26.022149  109206 healthz.go:175] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0611 09:03:26.022749  109206 wrap.go:47] GET /healthz: (1.451737ms) 500
goroutine 27225 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00b343a40, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00b343a40, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00856da00, 0x1f4)
net/http.Error(0x7f41414b65b0, 0xc004a0a740, 0xc003d30f00, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f41414b65b0, 0xc004a0a740, 0xc007ea2200)
net/http.HandlerFunc.ServeHTTP(0xc00d3315a0, 0x7f41414b65b0, 0xc004a0a740, 0xc007ea2200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc002e57ec0, 0x7f41414b65b0, 0xc004a0a740, 0xc007ea2200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00edee150, 0x7f41414b65b0, 0xc004a0a740, 0xc007ea2200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43c0f1b, 0xe, 0xc010a6a630, 0xc00edee150, 0x7f41414b65b0, 0xc004a0a740, 0xc007ea2200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f41414b65b0, 0xc004a0a740, 0xc007ea2200)
net/http.HandlerFunc.ServeHTTP(0xc010413d00, 0x7f41414b65b0, 0xc004a0a740, 0xc007ea2200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f41414b65b0, 0xc004a0a740, 0xc007ea2200)
net/http.HandlerFunc.ServeHTTP(0xc00edba870, 0x7f41414b65b0, 0xc004a0a740, 0xc007ea2200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f41414b65b0, 0xc004a0a740, 0xc007ea2200)
net/http.HandlerFunc.ServeHTTP(0xc010413d40, 0x7f41414b65b0, 0xc004a0a740, 0xc007ea2200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f41414b65b0, 0xc004a0a740, 0xc007ea2100)
net/http.HandlerFunc.ServeHTTP(0xc0104a3f90, 0x7f41414b65b0, 0xc004a0a740, 0xc007ea2100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0068a43c0, 0xc00f061d40, 0x74261e0, 0xc004a0a740, 0xc007ea2100)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:59266]
I0611 09:03:26.023019  109206 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-controller: (979.225µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:26.024925  109206 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.509902ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:26.025107  109206 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:service-controller
I0611 09:03:26.026151  109206 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:statefulset-controller: (819.598µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:26.028080  109206 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.566199ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:26.028327  109206 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0611 09:03:26.029310  109206 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:ttl-controller: (763.779µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:26.030990  109206 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.267662ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:26.031186  109206 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-controller
I0611 09:03:26.032167  109206 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:certificate-controller: (766.405µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:26.034099  109206 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0611 09:03:26.034116  109206 healthz.go:175] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0611 09:03:26.034243  109206 wrap.go:47] GET /healthz: (857.555µs) 500
goroutine 27249 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00b3b0a10, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00b3b0a10, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00824a480, 0x1f4)
net/http.Error(0x7f41414b65b0, 0xc0027e1a08, 0xc001579cc0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f41414b65b0, 0xc0027e1a08, 0xc007822c00)
net/http.HandlerFunc.ServeHTTP(0xc00d3315a0, 0x7f41414b65b0, 0xc0027e1a08, 0xc007822c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc002e57ec0, 0x7f41414b65b0, 0xc0027e1a08, 0xc007822c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00edee150, 0x7f41414b65b0, 0xc0027e1a08, 0xc007822c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43c0f1b, 0xe, 0xc010a6a630, 0xc00edee150, 0x7f41414b65b0, 0xc0027e1a08, 0xc007822c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f41414b65b0, 0xc0027e1a08, 0xc007822c00)
net/http.HandlerFunc.ServeHTTP(0xc010413d00, 0x7f41414b65b0, 0xc0027e1a08, 0xc007822c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f41414b65b0, 0xc0027e1a08, 0xc007822c00)
net/http.HandlerFunc.ServeHTTP(0xc00edba870, 0x7f41414b65b0, 0xc0027e1a08, 0xc007822c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f41414b65b0, 0xc0027e1a08, 0xc007822c00)
net/http.HandlerFunc.ServeHTTP(0xc010413d40, 0x7f41414b65b0, 0xc0027e1a08, 0xc007822c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f41414b65b0, 0xc0027e1a08, 0xc007822b00)
net/http.HandlerFunc.ServeHTTP(0xc0104a3f90, 0x7f41414b65b0, 0xc0027e1a08, 0xc007822b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0071d1aa0, 0xc00f061d40, 0x74261e0, 0xc0027e1a08, 0xc007822b00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:26.042823  109206 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.12469ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:26.043071  109206 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller
I0611 09:03:26.062310  109206 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pvc-protection-controller: (1.58124ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:26.083113  109206 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.337414ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:26.083754  109206 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0611 09:03:26.102866  109206 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pv-protection-controller: (2.134864ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:26.122509  109206 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0611 09:03:26.122638  109206 healthz.go:175] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0611 09:03:26.122978  109206 wrap.go:47] GET /healthz: (1.5535ms) 500
goroutine 27320 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00b3bd2d0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00b3bd2d0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc004409be0, 0x1f4)
net/http.Error(0x7f41414b65b0, 0xc004a0b9f0, 0xc006273180, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f41414b65b0, 0xc004a0b9f0, 0xc006f75c00)
net/http.HandlerFunc.ServeHTTP(0xc00d3315a0, 0x7f41414b65b0, 0xc004a0b9f0, 0xc006f75c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc002e57ec0, 0x7f41414b65b0, 0xc004a0b9f0, 0xc006f75c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00edee150, 0x7f41414b65b0, 0xc004a0b9f0, 0xc006f75c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43c0f1b, 0xe, 0xc010a6a630, 0xc00edee150, 0x7f41414b65b0, 0xc004a0b9f0, 0xc006f75c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f41414b65b0, 0xc004a0b9f0, 0xc006f75c00)
net/http.HandlerFunc.ServeHTTP(0xc010413d00, 0x7f41414b65b0, 0xc004a0b9f0, 0xc006f75c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f41414b65b0, 0xc004a0b9f0, 0xc006f75c00)
net/http.HandlerFunc.ServeHTTP(0xc00edba870, 0x7f41414b65b0, 0xc004a0b9f0, 0xc006f75c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f41414b65b0, 0xc004a0b9f0, 0xc006f75c00)
net/http.HandlerFunc.ServeHTTP(0xc010413d40, 0x7f41414b65b0, 0xc004a0b9f0, 0xc006f75c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f41414b65b0, 0xc004a0b9f0, 0xc006f75b00)
net/http.HandlerFunc.ServeHTTP(0xc0104a3f90, 0x7f41414b65b0, 0xc004a0b9f0, 0xc006f75b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00a1bfc20, 0xc00f061d40, 0x74261e0, 0xc004a0b9f0, 0xc006f75b00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:59266]
I0611 09:03:26.123694  109206 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.880689ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:26.124025  109206 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0611 09:03:26.134825  109206 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0611 09:03:26.134927  109206 healthz.go:175] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0611 09:03:26.135226  109206 wrap.go:47] GET /healthz: (1.631886ms) 500
goroutine 27322 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00b3bd3b0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00b3bd3b0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc003c623c0, 0x1f4)
net/http.Error(0x7f41414b65b0, 0xc004a0ba88, 0xc002638780, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f41414b65b0, 0xc004a0ba88, 0xc0070d0800)
net/http.HandlerFunc.ServeHTTP(0xc00d3315a0, 0x7f41414b65b0, 0xc004a0ba88, 0xc0070d0800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc002e57ec0, 0x7f41414b65b0, 0xc004a0ba88, 0xc0070d0800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00edee150, 0x7f41414b65b0, 0xc004a0ba88, 0xc0070d0800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43c0f1b, 0xe, 0xc010a6a630, 0xc00edee150, 0x7f41414b65b0, 0xc004a0ba88, 0xc0070d0800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f41414b65b0, 0xc004a0ba88, 0xc0070d0800)
net/http.HandlerFunc.ServeHTTP(0xc010413d00, 0x7f41414b65b0, 0xc004a0ba88, 0xc0070d0800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f41414b65b0, 0xc004a0ba88, 0xc0070d0800)
net/http.HandlerFunc.ServeHTTP(0xc00edba870, 0x7f41414b65b0, 0xc004a0ba88, 0xc0070d0800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f41414b65b0, 0xc004a0ba88, 0xc0070d0800)
net/http.HandlerFunc.ServeHTTP(0xc010413d40, 0x7f41414b65b0, 0xc004a0ba88, 0xc0070d0800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f41414b65b0, 0xc004a0ba88, 0xc0070d0700)
net/http.HandlerFunc.ServeHTTP(0xc0104a3f90, 0x7f41414b65b0, 0xc004a0ba88, 0xc0070d0700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00a5d08a0, 0xc00f061d40, 0x74261e0, 0xc004a0ba88, 0xc0070d0700)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:26.142250  109206 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin: (1.566491ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:26.163188  109206 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.473257ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:26.163621  109206 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin
I0611 09:03:26.182276  109206 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (1.509575ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:26.203440  109206 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.742472ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:26.203794  109206 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery
I0611 09:03:26.221981  109206 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:basic-user: (1.306392ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:26.222131  109206 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0611 09:03:26.222168  109206 healthz.go:175] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0611 09:03:26.222334  109206 wrap.go:47] GET /healthz: (1.022478ms) 500
goroutine 27213 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00b336d20, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00b336d20, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00316e820, 0x1f4)
net/http.Error(0x7f41414b65b0, 0xc011405b78, 0xc002638c80, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f41414b65b0, 0xc011405b78, 0xc0079fd000)
net/http.HandlerFunc.ServeHTTP(0xc00d3315a0, 0x7f41414b65b0, 0xc011405b78, 0xc0079fd000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc002e57ec0, 0x7f41414b65b0, 0xc011405b78, 0xc0079fd000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00edee150, 0x7f41414b65b0, 0xc011405b78, 0xc0079fd000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43c0f1b, 0xe, 0xc010a6a630, 0xc00edee150, 0x7f41414b65b0, 0xc011405b78, 0xc0079fd000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f41414b65b0, 0xc011405b78, 0xc0079fd000)
net/http.HandlerFunc.ServeHTTP(0xc010413d00, 0x7f41414b65b0, 0xc011405b78, 0xc0079fd000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f41414b65b0, 0xc011405b78, 0xc0079fd000)
net/http.HandlerFunc.ServeHTTP(0xc00edba870, 0x7f41414b65b0, 0xc011405b78, 0xc0079fd000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f41414b65b0, 0xc011405b78, 0xc0079fd000)
net/http.HandlerFunc.ServeHTTP(0xc010413d40, 0x7f41414b65b0, 0xc011405b78, 0xc0079fd000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f41414b65b0, 0xc011405b78, 0xc0079fce00)
net/http.HandlerFunc.ServeHTTP(0xc0104a3f90, 0x7f41414b65b0, 0xc011405b78, 0xc0079fce00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0063537a0, 0xc00f061d40, 0x74261e0, 0xc011405b78, 0xc0079fce00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:59266]
I0611 09:03:26.235018  109206 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0611 09:03:26.235062  109206 healthz.go:175] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0611 09:03:26.235276  109206 wrap.go:47] GET /healthz: (1.750111ms) 500
goroutine 27329 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00b3bd9d0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00b3bd9d0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc003d06d60, 0x1f4)
net/http.Error(0x7f41414b65b0, 0xc004a0bdc8, 0xc00432e780, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f41414b65b0, 0xc004a0bdc8, 0xc0069eea00)
net/http.HandlerFunc.ServeHTTP(0xc00d3315a0, 0x7f41414b65b0, 0xc004a0bdc8, 0xc0069eea00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc002e57ec0, 0x7f41414b65b0, 0xc004a0bdc8, 0xc0069eea00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00edee150, 0x7f41414b65b0, 0xc004a0bdc8, 0xc0069eea00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43c0f1b, 0xe, 0xc010a6a630, 0xc00edee150, 0x7f41414b65b0, 0xc004a0bdc8, 0xc0069eea00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f41414b65b0, 0xc004a0bdc8, 0xc0069eea00)
net/http.HandlerFunc.ServeHTTP(0xc010413d00, 0x7f41414b65b0, 0xc004a0bdc8, 0xc0069eea00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f41414b65b0, 0xc004a0bdc8, 0xc0069eea00)
net/http.HandlerFunc.ServeHTTP(0xc00edba870, 0x7f41414b65b0, 0xc004a0bdc8, 0xc0069eea00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f41414b65b0, 0xc004a0bdc8, 0xc0069eea00)
net/http.HandlerFunc.ServeHTTP(0xc010413d40, 0x7f41414b65b0, 0xc004a0bdc8, 0xc0069eea00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f41414b65b0, 0xc004a0bdc8, 0xc0069ee100)
net/http.HandlerFunc.ServeHTTP(0xc0104a3f90, 0x7f41414b65b0, 0xc004a0bdc8, 0xc0069ee100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00a9c5ec0, 0xc00f061d40, 0x74261e0, 0xc004a0bdc8, 0xc0069ee100)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59266]
I0611 09:03:26.243155  109206 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.429191ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59266]
I0611 09:03:26.243461  109206 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user
I0611 09:03:26.262022  109206 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:public-info-viewer: (1.342158ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59266]
I0611 09:03:26.283640  109206 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.964192ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59266]
I0611 09:03:26.284000  109206 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:public-info-viewer
I0611 09:03:26.302409  109206 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node-proxier: (1.717342ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59266]
I0611 09:03:26.323120  109206 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.453698ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59266]
I0611 09:03:26.323557  109206 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier
I0611 09:03:26.323700  109206 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0611 09:03:26.323844  109206 healthz.go:175] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0611 09:03:26.324047  109206 wrap.go:47] GET /healthz: (2.614889ms) 500
goroutine 27349 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00b3bdf10, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00b3bdf10, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0032a37a0, 0x1f4)
net/http.Error(0x7f41414b65b0, 0xc0001769c0, 0xc006273900, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f41414b65b0, 0xc0001769c0, 0xc006d04200)
net/http.HandlerFunc.ServeHTTP(0xc00d3315a0, 0x7f41414b65b0, 0xc0001769c0, 0xc006d04200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc002e57ec0, 0x7f41414b65b0, 0xc0001769c0, 0xc006d04200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00edee150, 0x7f41414b65b0, 0xc0001769c0, 0xc006d04200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43c0f1b, 0xe, 0xc010a6a630, 0xc00edee150, 0x7f41414b65b0, 0xc0001769c0, 0xc006d04200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f41414b65b0, 0xc0001769c0, 0xc006d04200)
net/http.HandlerFunc.ServeHTTP(0xc010413d00, 0x7f41414b65b0, 0xc0001769c0, 0xc006d04200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f41414b65b0, 0xc0001769c0, 0xc006d04200)
net/http.HandlerFunc.ServeHTTP(0xc00edba870, 0x7f41414b65b0, 0xc0001769c0, 0xc006d04200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f41414b65b0, 0xc0001769c0, 0xc006d04200)
net/http.HandlerFunc.ServeHTTP(0xc010413d40, 0x7f41414b65b0, 0xc0001769c0, 0xc006d04200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f41414b65b0, 0xc0001769c0, 0xc006d04100)
net/http.HandlerFunc.ServeHTTP(0xc0104a3f90, 0x7f41414b65b0, 0xc0001769c0, 0xc006d04100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00ac56000, 0xc00f061d40, 0x74261e0, 0xc0001769c0, 0xc006d04100)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:59258]
I0611 09:03:26.334798  109206 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0611 09:03:26.334837  109206 healthz.go:175] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0611 09:03:26.335097  109206 wrap.go:47] GET /healthz: (1.466108ms) 500
goroutine 27308 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00b3c6690, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00b3c6690, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0032e5aa0, 0x1f4)
net/http.Error(0x7f41414b65b0, 0xc00d1fb860, 0xc0022c1e00, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f41414b65b0, 0xc00d1fb860, 0xc0072b7c00)
net/http.HandlerFunc.ServeHTTP(0xc00d3315a0, 0x7f41414b65b0, 0xc00d1fb860, 0xc0072b7c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc002e57ec0, 0x7f41414b65b0, 0xc00d1fb860, 0xc0072b7c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00edee150, 0x7f41414b65b0, 0xc00d1fb860, 0xc0072b7c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43c0f1b, 0xe, 0xc010a6a630, 0xc00edee150, 0x7f41414b65b0, 0xc00d1fb860, 0xc0072b7c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f41414b65b0, 0xc00d1fb860, 0xc0072b7c00)
net/http.HandlerFunc.ServeHTTP(0xc010413d00, 0x7f41414b65b0, 0xc00d1fb860, 0xc0072b7c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f41414b65b0, 0xc00d1fb860, 0xc0072b7c00)
net/http.HandlerFunc.ServeHTTP(0xc00edba870, 0x7f41414b65b0, 0xc00d1fb860, 0xc0072b7c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f41414b65b0, 0xc00d1fb860, 0xc0072b7c00)
net/http.HandlerFunc.ServeHTTP(0xc010413d40, 0x7f41414b65b0, 0xc00d1fb860, 0xc0072b7c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f41414b65b0, 0xc00d1fb860, 0xc0072b7b00)
net/http.HandlerFunc.ServeHTTP(0xc0104a3f90, 0x7f41414b65b0, 0xc00d1fb860, 0xc0072b7b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00bb5ade0, 0xc00f061d40, 0x74261e0, 0xc00d1fb860, 0xc0072b7b00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:26.342267  109206 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-controller-manager: (1.631446ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:26.362969  109206 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.306369ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:26.363208  109206 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager
I0611 09:03:26.382029  109206 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-dns: (1.348126ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:26.403016  109206 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.340695ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:26.403265  109206 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns
I0611 09:03:26.422830  109206 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-scheduler: (2.114912ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:26.424007  109206 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0611 09:03:26.424034  109206 healthz.go:175] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0611 09:03:26.424659  109206 wrap.go:47] GET /healthz: (2.309015ms) 500
goroutine 27312 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00b3c6bd0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00b3c6bd0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0031bd340, 0x1f4)
net/http.Error(0x7f41414b65b0, 0xc00d1fb910, 0xc002639400, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f41414b65b0, 0xc00d1fb910, 0xc004260d00)
net/http.HandlerFunc.ServeHTTP(0xc00d3315a0, 0x7f41414b65b0, 0xc00d1fb910, 0xc004260d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc002e57ec0, 0x7f41414b65b0, 0xc00d1fb910, 0xc004260d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00edee150, 0x7f41414b65b0, 0xc00d1fb910, 0xc004260d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43c0f1b, 0xe, 0xc010a6a630, 0xc00edee150, 0x7f41414b65b0, 0xc00d1fb910, 0xc004260d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f41414b65b0, 0xc00d1fb910, 0xc004260d00)
net/http.HandlerFunc.ServeHTTP(0xc010413d00, 0x7f41414b65b0, 0xc00d1fb910, 0xc004260d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f41414b65b0, 0xc00d1fb910, 0xc004260d00)
net/http.HandlerFunc.ServeHTTP(0xc00edba870, 0x7f41414b65b0, 0xc00d1fb910, 0xc004260d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f41414b65b0, 0xc00d1fb910, 0xc004260d00)
net/http.HandlerFunc.ServeHTTP(0xc010413d40, 0x7f41414b65b0, 0xc00d1fb910, 0xc004260d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f41414b65b0, 0xc00d1fb910, 0xc004260c00)
net/http.HandlerFunc.ServeHTTP(0xc0104a3f90, 0x7f41414b65b0, 0xc00d1fb910, 0xc004260c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00aea4900, 0xc00f061d40, 0x74261e0, 0xc00d1fb910, 0xc004260c00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:59266]
I0611 09:03:26.440240  109206 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0611 09:03:26.440273  109206 healthz.go:175] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0611 09:03:26.440493  109206 wrap.go:47] GET /healthz: (4.826819ms) 500
goroutine 27362 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00b337810, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00b337810, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00312d2e0, 0x1f4)
net/http.Error(0x7f41414b65b0, 0xc011405d10, 0xc004472b40, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f41414b65b0, 0xc011405d10, 0xc0079fdb00)
net/http.HandlerFunc.ServeHTTP(0xc00d3315a0, 0x7f41414b65b0, 0xc011405d10, 0xc0079fdb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc002e57ec0, 0x7f41414b65b0, 0xc011405d10, 0xc0079fdb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00edee150, 0x7f41414b65b0, 0xc011405d10, 0xc0079fdb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43c0f1b, 0xe, 0xc010a6a630, 0xc00edee150, 0x7f41414b65b0, 0xc011405d10, 0xc0079fdb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f41414b65b0, 0xc011405d10, 0xc0079fdb00)
net/http.HandlerFunc.ServeHTTP(0xc010413d00, 0x7f41414b65b0, 0xc011405d10, 0xc0079fdb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f41414b65b0, 0xc011405d10, 0xc0079fdb00)
net/http.HandlerFunc.ServeHTTP(0xc00edba870, 0x7f41414b65b0, 0xc011405d10, 0xc0079fdb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f41414b65b0, 0xc011405d10, 0xc0079fdb00)
net/http.HandlerFunc.ServeHTTP(0xc010413d40, 0x7f41414b65b0, 0xc011405d10, 0xc0079fdb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f41414b65b0, 0xc011405d10, 0xc0079fda00)
net/http.HandlerFunc.ServeHTTP(0xc0104a3f90, 0x7f41414b65b0, 0xc011405d10, 0xc0079fda00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc006353e00, 0xc00f061d40, 0x74261e0, 0xc011405d10, 0xc0079fda00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59266]
I0611 09:03:26.443810  109206 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.942355ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59266]
I0611 09:03:26.444064  109206 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler
I0611 09:03:26.461856  109206 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:volume-scheduler: (1.241022ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59266]
I0611 09:03:26.483110  109206 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.408534ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59266]
I0611 09:03:26.483473  109206 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:volume-scheduler
I0611 09:03:26.502220  109206 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node: (1.510435ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59266]
I0611 09:03:26.523137  109206 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0611 09:03:26.523169  109206 healthz.go:175] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0611 09:03:26.523336  109206 wrap.go:47] GET /healthz: (2.012455ms) 500
goroutine 27297 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00b3a9b20, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00b3a9b20, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0030d0000, 0x1f4)
net/http.Error(0x7f41414b65b0, 0xc008a19470, 0xc004473180, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f41414b65b0, 0xc008a19470, 0xc006b31f00)
net/http.HandlerFunc.ServeHTTP(0xc00d3315a0, 0x7f41414b65b0, 0xc008a19470, 0xc006b31f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc002e57ec0, 0x7f41414b65b0, 0xc008a19470, 0xc006b31f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00edee150, 0x7f41414b65b0, 0xc008a19470, 0xc006b31f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43c0f1b, 0xe, 0xc010a6a630, 0xc00edee150, 0x7f41414b65b0, 0xc008a19470, 0xc006b31f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f41414b65b0, 0xc008a19470, 0xc006b31f00)
net/http.HandlerFunc.ServeHTTP(0xc010413d00, 0x7f41414b65b0, 0xc008a19470, 0xc006b31f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f41414b65b0, 0xc008a19470, 0xc006b31f00)
net/http.HandlerFunc.ServeHTTP(0xc00edba870, 0x7f41414b65b0, 0xc008a19470, 0xc006b31f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f41414b65b0, 0xc008a19470, 0xc006b31f00)
net/http.HandlerFunc.ServeHTTP(0xc010413d40, 0x7f41414b65b0, 0xc008a19470, 0xc006b31f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f41414b65b0, 0xc008a19470, 0xc006b31b00)
net/http.HandlerFunc.ServeHTTP(0xc0104a3f90, 0x7f41414b65b0, 0xc008a19470, 0xc006b31b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00b118e40, 0xc00f061d40, 0x74261e0, 0xc008a19470, 0xc006b31b00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:59258]
I0611 09:03:26.523644  109206 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.834711ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59266]
I0611 09:03:26.523988  109206 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:node
I0611 09:03:26.534907  109206 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0611 09:03:26.534938  109206 healthz.go:175] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0611 09:03:26.535101  109206 wrap.go:47] GET /healthz: (1.35524ms) 500
goroutine 27395 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00b3a9ce0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00b3a9ce0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0030d08c0, 0x1f4)
net/http.Error(0x7f41414b65b0, 0xc008a19508, 0xc002639a40, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f41414b65b0, 0xc008a19508, 0xc005ae2900)
net/http.HandlerFunc.ServeHTTP(0xc00d3315a0, 0x7f41414b65b0, 0xc008a19508, 0xc005ae2900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc002e57ec0, 0x7f41414b65b0, 0xc008a19508, 0xc005ae2900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00edee150, 0x7f41414b65b0, 0xc008a19508, 0xc005ae2900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43c0f1b, 0xe, 0xc010a6a630, 0xc00edee150, 0x7f41414b65b0, 0xc008a19508, 0xc005ae2900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f41414b65b0, 0xc008a19508, 0xc005ae2900)
net/http.HandlerFunc.ServeHTTP(0xc010413d00, 0x7f41414b65b0, 0xc008a19508, 0xc005ae2900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f41414b65b0, 0xc008a19508, 0xc005ae2900)
net/http.HandlerFunc.ServeHTTP(0xc00edba870, 0x7f41414b65b0, 0xc008a19508, 0xc005ae2900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f41414b65b0, 0xc008a19508, 0xc005ae2900)
net/http.HandlerFunc.ServeHTTP(0xc010413d40, 0x7f41414b65b0, 0xc008a19508, 0xc005ae2900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f41414b65b0, 0xc008a19508, 0xc005ae2800)
net/http.HandlerFunc.ServeHTTP(0xc0104a3f90, 0x7f41414b65b0, 0xc008a19508, 0xc005ae2800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00b119260, 0xc00f061d40, 0x74261e0, 0xc008a19508, 0xc005ae2800)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59266]
I0611 09:03:26.544696  109206 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:attachdetach-controller: (3.031019ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59266]
I0611 09:03:26.568438  109206 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (6.805431ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59266]
I0611 09:03:26.568837  109206 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0611 09:03:26.582031  109206 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:clusterrole-aggregation-controller: (1.408144ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59266]
I0611 09:03:26.602841  109206 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.157106ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59266]
I0611 09:03:26.603833  109206 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0611 09:03:26.622029  109206 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:cronjob-controller: (1.331778ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59266]
I0611 09:03:26.622522  109206 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0611 09:03:26.622542  109206 healthz.go:175] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0611 09:03:26.622724  109206 wrap.go:47] GET /healthz: (1.365988ms) 500
goroutine 27412 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00b387dc0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00b387dc0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc003162b80, 0x1f4)
net/http.Error(0x7f41414b65b0, 0xc00d5cb3a0, 0xc002f02000, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f41414b65b0, 0xc00d5cb3a0, 0xc009c91600)
net/http.HandlerFunc.ServeHTTP(0xc00d3315a0, 0x7f41414b65b0, 0xc00d5cb3a0, 0xc009c91600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc002e57ec0, 0x7f41414b65b0, 0xc00d5cb3a0, 0xc009c91600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00edee150, 0x7f41414b65b0, 0xc00d5cb3a0, 0xc009c91600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43c0f1b, 0xe, 0xc010a6a630, 0xc00edee150, 0x7f41414b65b0, 0xc00d5cb3a0, 0xc009c91600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f41414b65b0, 0xc00d5cb3a0, 0xc009c91600)
net/http.HandlerFunc.ServeHTTP(0xc010413d00, 0x7f41414b65b0, 0xc00d5cb3a0, 0xc009c91600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f41414b65b0, 0xc00d5cb3a0, 0xc009c91600)
net/http.HandlerFunc.ServeHTTP(0xc00edba870, 0x7f41414b65b0, 0xc00d5cb3a0, 0xc009c91600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f41414b65b0, 0xc00d5cb3a0, 0xc009c91600)
net/http.HandlerFunc.ServeHTTP(0xc010413d40, 0x7f41414b65b0, 0xc00d5cb3a0, 0xc009c91600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f41414b65b0, 0xc00d5cb3a0, 0xc009c91400)
net/http.HandlerFunc.ServeHTTP(0xc0104a3f90, 0x7f41414b65b0, 0xc00d5cb3a0, 0xc009c91400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00b239680, 0xc00f061d40, 0x74261e0, 0xc00d5cb3a0, 0xc009c91400)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:59258]
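The repeated 500 responses above come from /healthz while the rbac/bootstrap-roles post-start hook is still running; callers simply have to retry until the endpoint returns 200. The following is a minimal, hypothetical readiness-polling sketch (waitForHealthz is an invented helper, not the one used by the integration test framework), assuming the apiserver is reachable on a local URL.

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the given /healthz URL until it returns 200 OK or the
// timeout expires. Illustrative only; the real test harness has its own wait
// helpers.
func waitForHealthz(url string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := http.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			// A 500 body lists each check, e.g.
			// "[-]poststarthook/rbac/bootstrap-roles failed: reason withheld".
			fmt.Printf("healthz not ready (%d): %s\n", resp.StatusCode, body)
		}
		time.Sleep(100 * time.Millisecond)
	}
	return fmt.Errorf("healthz at %s did not become ready within %v", url, timeout)
}

func main() {
	if err := waitForHealthz("http://127.0.0.1:8080/healthz", 30*time.Second); err != nil {
		fmt.Println(err)
	}
}
```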
I0611 09:03:26.634294  109206 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0611 09:03:26.634325  109206 healthz.go:175] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0611 09:03:26.634501  109206 wrap.go:47] GET /healthz: (1.066073ms) 500
goroutine 27414 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00b387ea0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00b387ea0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc003163040, 0x1f4)
net/http.Error(0x7f41414b65b0, 0xc00d5cb3d0, 0xc003d31a40, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f41414b65b0, 0xc00d5cb3d0, 0xc009ecaa00)
net/http.HandlerFunc.ServeHTTP(0xc00d3315a0, 0x7f41414b65b0, 0xc00d5cb3d0, 0xc009ecaa00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc002e57ec0, 0x7f41414b65b0, 0xc00d5cb3d0, 0xc009ecaa00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00edee150, 0x7f41414b65b0, 0xc00d5cb3d0, 0xc009ecaa00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43c0f1b, 0xe, 0xc010a6a630, 0xc00edee150, 0x7f41414b65b0, 0xc00d5cb3d0, 0xc009ecaa00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f41414b65b0, 0xc00d5cb3d0, 0xc009ecaa00)
net/http.HandlerFunc.ServeHTTP(0xc010413d00, 0x7f41414b65b0, 0xc00d5cb3d0, 0xc009ecaa00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f41414b65b0, 0xc00d5cb3d0, 0xc009ecaa00)
net/http.HandlerFunc.ServeHTTP(0xc00edba870, 0x7f41414b65b0, 0xc00d5cb3d0, 0xc009ecaa00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f41414b65b0, 0xc00d5cb3d0, 0xc009ecaa00)
net/http.HandlerFunc.ServeHTTP(0xc010413d40, 0x7f41414b65b0, 0xc00d5cb3d0, 0xc009ecaa00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f41414b65b0, 0xc00d5cb3d0, 0xc009eca600)
net/http.HandlerFunc.ServeHTTP(0xc0104a3f90, 0x7f41414b65b0, 0xc00d5cb3d0, 0xc009eca600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00b2841e0, 0xc00f061d40, 0x74261e0, 0xc00d5cb3d0, 0xc009eca600)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:26.642918  109206 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.310558ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:26.643181  109206 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0611 09:03:26.670799  109206 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:daemon-set-controller: (10.145217ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:26.684210  109206 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.588104ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:26.684529  109206 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
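Each GET ... 404 followed by a POST ... 201 and a "created clusterrolebinding" line is the RBAC bootstrapper ensuring a default binding exists. The same get-then-create idiom can be expressed against client-go; the sketch below is only an illustration of that pattern with a hypothetical helper name, not the storage_rbac.go code, which works directly against the RBAC REST storage (signatures assume a recent client-go with context-aware methods).

```go
package bootstrapsketch

import (
	"context"
	"fmt"

	rbacv1 "k8s.io/api/rbac/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// ensureClusterRoleBinding creates the binding only if it does not already
// exist, mirroring the GET-404 / POST-201 sequence seen in the log above.
func ensureClusterRoleBinding(ctx context.Context, c kubernetes.Interface, crb *rbacv1.ClusterRoleBinding) error {
	_, err := c.RbacV1().ClusterRoleBindings().Get(ctx, crb.Name, metav1.GetOptions{})
	if err == nil {
		return nil // already present, nothing to do
	}
	if !apierrors.IsNotFound(err) {
		return err
	}
	_, err = c.RbacV1().ClusterRoleBindings().Create(ctx, crb, metav1.CreateOptions{})
	if apierrors.IsAlreadyExists(err) {
		return nil // lost a benign race with another bootstrapper
	}
	if err == nil {
		fmt.Printf("created clusterrolebinding %s\n", crb.Name)
	}
	return err
}
```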
I0611 09:03:26.702774  109206 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:deployment-controller: (2.154565ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:26.723464  109206 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.685926ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:26.723627  109206 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0611 09:03:26.723646  109206 healthz.go:175] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0611 09:03:26.723791  109206 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I0611 09:03:26.723828  109206 wrap.go:47] GET /healthz: (2.478576ms) 500
goroutine 27353 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00b3d5420, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00b3d5420, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc002fff5a0, 0x1f4)
net/http.Error(0x7f41414b65b0, 0xc000177ec0, 0xc001f6ab40, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f41414b65b0, 0xc000177ec0, 0xc004b10a00)
net/http.HandlerFunc.ServeHTTP(0xc00d3315a0, 0x7f41414b65b0, 0xc000177ec0, 0xc004b10a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc002e57ec0, 0x7f41414b65b0, 0xc000177ec0, 0xc004b10a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00edee150, 0x7f41414b65b0, 0xc000177ec0, 0xc004b10a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43c0f1b, 0xe, 0xc010a6a630, 0xc00edee150, 0x7f41414b65b0, 0xc000177ec0, 0xc004b10a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f41414b65b0, 0xc000177ec0, 0xc004b10a00)
net/http.HandlerFunc.ServeHTTP(0xc010413d00, 0x7f41414b65b0, 0xc000177ec0, 0xc004b10a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f41414b65b0, 0xc000177ec0, 0xc004b10a00)
net/http.HandlerFunc.ServeHTTP(0xc00edba870, 0x7f41414b65b0, 0xc000177ec0, 0xc004b10a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f41414b65b0, 0xc000177ec0, 0xc004b10a00)
net/http.HandlerFunc.ServeHTTP(0xc010413d40, 0x7f41414b65b0, 0xc000177ec0, 0xc004b10a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f41414b65b0, 0xc000177ec0, 0xc004b10800)
net/http.HandlerFunc.ServeHTTP(0xc0104a3f90, 0x7f41414b65b0, 0xc000177ec0, 0xc004b10800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00aff0720, 0xc00f061d40, 0x74261e0, 0xc000177ec0, 0xc004b10800)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:59266]
I0611 09:03:26.734901  109206 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0611 09:03:26.734936  109206 healthz.go:175] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0611 09:03:26.735136  109206 wrap.go:47] GET /healthz: (1.373661ms) 500
goroutine 27032 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00b256cb0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00b256cb0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00d189740, 0x1f4)
net/http.Error(0x7f41414b65b0, 0xc00d4ce5f8, 0xc0062d4640, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f41414b65b0, 0xc00d4ce5f8, 0xc005cd0d00)
net/http.HandlerFunc.ServeHTTP(0xc00d3315a0, 0x7f41414b65b0, 0xc00d4ce5f8, 0xc005cd0d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc002e57ec0, 0x7f41414b65b0, 0xc00d4ce5f8, 0xc005cd0d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00edee150, 0x7f41414b65b0, 0xc00d4ce5f8, 0xc005cd0d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43c0f1b, 0xe, 0xc010a6a630, 0xc00edee150, 0x7f41414b65b0, 0xc00d4ce5f8, 0xc005cd0d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f41414b65b0, 0xc00d4ce5f8, 0xc005cd0d00)
net/http.HandlerFunc.ServeHTTP(0xc010413d00, 0x7f41414b65b0, 0xc00d4ce5f8, 0xc005cd0d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f41414b65b0, 0xc00d4ce5f8, 0xc005cd0d00)
net/http.HandlerFunc.ServeHTTP(0xc00edba870, 0x7f41414b65b0, 0xc00d4ce5f8, 0xc005cd0d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f41414b65b0, 0xc00d4ce5f8, 0xc005cd0d00)
net/http.HandlerFunc.ServeHTTP(0xc010413d40, 0x7f41414b65b0, 0xc00d4ce5f8, 0xc005cd0d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f41414b65b0, 0xc00d4ce5f8, 0xc005cd0c00)
net/http.HandlerFunc.ServeHTTP(0xc0104a3f90, 0x7f41414b65b0, 0xc00d4ce5f8, 0xc005cd0c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00cabe600, 0xc00f061d40, 0x74261e0, 0xc00d4ce5f8, 0xc005cd0c00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59266]
I0611 09:03:26.742181  109206 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:disruption-controller: (1.516711ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59266]
I0611 09:03:26.762901  109206 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.134588ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59266]
I0611 09:03:26.763162  109206 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I0611 09:03:26.782634  109206 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:endpoint-controller: (1.782427ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59266]
I0611 09:03:26.805146  109206 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (4.444299ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59266]
I0611 09:03:26.805455  109206 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0611 09:03:26.822420  109206 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:expand-controller: (1.790193ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59266]
I0611 09:03:26.823408  109206 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0611 09:03:26.823432  109206 healthz.go:175] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0611 09:03:26.823992  109206 wrap.go:47] GET /healthz: (2.069848ms) 500
goroutine 27424 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00b89eb60, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00b89eb60, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc002eb7e40, 0x1f4)
net/http.Error(0x7f41414b65b0, 0xc00d5cb600, 0xc0044737c0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f41414b65b0, 0xc00d5cb600, 0xc00bb8e300)
net/http.HandlerFunc.ServeHTTP(0xc00d3315a0, 0x7f41414b65b0, 0xc00d5cb600, 0xc00bb8e300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc002e57ec0, 0x7f41414b65b0, 0xc00d5cb600, 0xc00bb8e300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00edee150, 0x7f41414b65b0, 0xc00d5cb600, 0xc00bb8e300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43c0f1b, 0xe, 0xc010a6a630, 0xc00edee150, 0x7f41414b65b0, 0xc00d5cb600, 0xc00bb8e300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f41414b65b0, 0xc00d5cb600, 0xc00bb8e300)
net/http.HandlerFunc.ServeHTTP(0xc010413d00, 0x7f41414b65b0, 0xc00d5cb600, 0xc00bb8e300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f41414b65b0, 0xc00d5cb600, 0xc00bb8e300)
net/http.HandlerFunc.ServeHTTP(0xc00edba870, 0x7f41414b65b0, 0xc00d5cb600, 0xc00bb8e300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f41414b65b0, 0xc00d5cb600, 0xc00bb8e300)
net/http.HandlerFunc.ServeHTTP(0xc010413d40, 0x7f41414b65b0, 0xc00d5cb600, 0xc00bb8e300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f41414b65b0, 0xc00d5cb600, 0xc00bb8e200)
net/http.HandlerFunc.ServeHTTP(0xc0104a3f90, 0x7f41414b65b0, 0xc00d5cb600, 0xc00bb8e200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00b44c600, 0xc00f061d40, 0x74261e0, 0xc00d5cb600, 0xc00bb8e200)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:59258]
I0611 09:03:26.834720  109206 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0611 09:03:26.834752  109206 healthz.go:175] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0611 09:03:26.834936  109206 wrap.go:47] GET /healthz: (1.410512ms) 500
goroutine 27358 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00b3d5810, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00b3d5810, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc002d6e3e0, 0x1f4)
net/http.Error(0x7f41414b65b0, 0xc002f720e0, 0xc00432edc0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f41414b65b0, 0xc002f720e0, 0xc00ba68500)
net/http.HandlerFunc.ServeHTTP(0xc00d3315a0, 0x7f41414b65b0, 0xc002f720e0, 0xc00ba68500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc002e57ec0, 0x7f41414b65b0, 0xc002f720e0, 0xc00ba68500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00edee150, 0x7f41414b65b0, 0xc002f720e0, 0xc00ba68500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43c0f1b, 0xe, 0xc010a6a630, 0xc00edee150, 0x7f41414b65b0, 0xc002f720e0, 0xc00ba68500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f41414b65b0, 0xc002f720e0, 0xc00ba68500)
net/http.HandlerFunc.ServeHTTP(0xc010413d00, 0x7f41414b65b0, 0xc002f720e0, 0xc00ba68500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f41414b65b0, 0xc002f720e0, 0xc00ba68500)
net/http.HandlerFunc.ServeHTTP(0xc00edba870, 0x7f41414b65b0, 0xc002f720e0, 0xc00ba68500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f41414b65b0, 0xc002f720e0, 0xc00ba68500)
net/http.HandlerFunc.ServeHTTP(0xc010413d40, 0x7f41414b65b0, 0xc002f720e0, 0xc00ba68500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f41414b65b0, 0xc002f720e0, 0xc00ba68400)
net/http.HandlerFunc.ServeHTTP(0xc0104a3f90, 0x7f41414b65b0, 0xc002f720e0, 0xc00ba68400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc009a1cd20, 0xc00f061d40, 0x74261e0, 0xc002f720e0, 0xc00ba68400)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:26.842856  109206 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.225415ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:26.843074  109206 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
I0611 09:03:26.862305  109206 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:generic-garbage-collector: (1.60598ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:26.883114  109206 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.421901ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:26.883392  109206 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0611 09:03:26.902206  109206 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:horizontal-pod-autoscaler: (1.507133ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:26.923062  109206 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0611 09:03:26.923215  109206 healthz.go:175] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0611 09:03:26.923424  109206 wrap.go:47] GET /healthz: (2.031526ms) 500
goroutine 27452 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00b89fe30, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00b89fe30, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc002c70ea0, 0x1f4)
net/http.Error(0x7f41414b65b0, 0xc00d5cb8b0, 0xc002f02b40, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f41414b65b0, 0xc00d5cb8b0, 0xc00de0a900)
net/http.HandlerFunc.ServeHTTP(0xc00d3315a0, 0x7f41414b65b0, 0xc00d5cb8b0, 0xc00de0a900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc002e57ec0, 0x7f41414b65b0, 0xc00d5cb8b0, 0xc00de0a900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00edee150, 0x7f41414b65b0, 0xc00d5cb8b0, 0xc00de0a900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43c0f1b, 0xe, 0xc010a6a630, 0xc00edee150, 0x7f41414b65b0, 0xc00d5cb8b0, 0xc00de0a900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f41414b65b0, 0xc00d5cb8b0, 0xc00de0a900)
net/http.HandlerFunc.ServeHTTP(0xc010413d00, 0x7f41414b65b0, 0xc00d5cb8b0, 0xc00de0a900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f41414b65b0, 0xc00d5cb8b0, 0xc00de0a900)
net/http.HandlerFunc.ServeHTTP(0xc00edba870, 0x7f41414b65b0, 0xc00d5cb8b0, 0xc00de0a900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f41414b65b0, 0xc00d5cb8b0, 0xc00de0a900)
net/http.HandlerFunc.ServeHTTP(0xc010413d40, 0x7f41414b65b0, 0xc00d5cb8b0, 0xc00de0a900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f41414b65b0, 0xc00d5cb8b0, 0xc00de0a800)
net/http.HandlerFunc.ServeHTTP(0xc0104a3f90, 0x7f41414b65b0, 0xc00d5cb8b0, 0xc00de0a800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00d592a20, 0xc00f061d40, 0x74261e0, 0xc00d5cb8b0, 0xc00de0a800)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:59266]
I0611 09:03:26.923518  109206 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.794686ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:26.923724  109206 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0611 09:03:26.934680  109206 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0611 09:03:26.934713  109206 healthz.go:175] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0611 09:03:26.934906  109206 wrap.go:47] GET /healthz: (1.350486ms) 500
goroutine 27360 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00b3d5d50, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00b3d5d50, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc002d6eac0, 0x1f4)
net/http.Error(0x7f41414b65b0, 0xc002f72348, 0xc00432f400, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f41414b65b0, 0xc002f72348, 0xc00ba69000)
net/http.HandlerFunc.ServeHTTP(0xc00d3315a0, 0x7f41414b65b0, 0xc002f72348, 0xc00ba69000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc002e57ec0, 0x7f41414b65b0, 0xc002f72348, 0xc00ba69000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00edee150, 0x7f41414b65b0, 0xc002f72348, 0xc00ba69000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43c0f1b, 0xe, 0xc010a6a630, 0xc00edee150, 0x7f41414b65b0, 0xc002f72348, 0xc00ba69000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f41414b65b0, 0xc002f72348, 0xc00ba69000)
net/http.HandlerFunc.ServeHTTP(0xc010413d00, 0x7f41414b65b0, 0xc002f72348, 0xc00ba69000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f41414b65b0, 0xc002f72348, 0xc00ba69000)
net/http.HandlerFunc.ServeHTTP(0xc00edba870, 0x7f41414b65b0, 0xc002f72348, 0xc00ba69000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f41414b65b0, 0xc002f72348, 0xc00ba69000)
net/http.HandlerFunc.ServeHTTP(0xc010413d40, 0x7f41414b65b0, 0xc002f72348, 0xc00ba69000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f41414b65b0, 0xc002f72348, 0xc00ba68f00)
net/http.HandlerFunc.ServeHTTP(0xc0104a3f90, 0x7f41414b65b0, 0xc002f72348, 0xc00ba68f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc009a1dc80, 0xc00f061d40, 0x74261e0, 0xc002f72348, 0xc00ba68f00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:26.942051  109206 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:job-controller: (1.383943ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:26.962903  109206 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.264577ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:26.963215  109206 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I0611 09:03:26.982200  109206 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:namespace-controller: (1.438307ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:27.003046  109206 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.291513ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:27.003406  109206 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I0611 09:03:27.022115  109206 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:node-controller: (1.432203ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:27.022887  109206 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0611 09:03:27.022914  109206 healthz.go:175] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0611 09:03:27.023158  109206 wrap.go:47] GET /healthz: (1.235764ms) 500
goroutine 27433 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00b3efb90, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00b3efb90, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc002bb74e0, 0x1f4)
net/http.Error(0x7f41414b65b0, 0xc000bcae10, 0xc0062d4c80, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f41414b65b0, 0xc000bcae10, 0xc00a65ff00)
net/http.HandlerFunc.ServeHTTP(0xc00d3315a0, 0x7f41414b65b0, 0xc000bcae10, 0xc00a65ff00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc002e57ec0, 0x7f41414b65b0, 0xc000bcae10, 0xc00a65ff00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00edee150, 0x7f41414b65b0, 0xc000bcae10, 0xc00a65ff00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43c0f1b, 0xe, 0xc010a6a630, 0xc00edee150, 0x7f41414b65b0, 0xc000bcae10, 0xc00a65ff00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f41414b65b0, 0xc000bcae10, 0xc00a65ff00)
net/http.HandlerFunc.ServeHTTP(0xc010413d00, 0x7f41414b65b0, 0xc000bcae10, 0xc00a65ff00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f41414b65b0, 0xc000bcae10, 0xc00a65ff00)
net/http.HandlerFunc.ServeHTTP(0xc00edba870, 0x7f41414b65b0, 0xc000bcae10, 0xc00a65ff00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f41414b65b0, 0xc000bcae10, 0xc00a65ff00)
net/http.HandlerFunc.ServeHTTP(0xc010413d40, 0x7f41414b65b0, 0xc000bcae10, 0xc00a65ff00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f41414b65b0, 0xc000bcae10, 0xc00a65fe00)
net/http.HandlerFunc.ServeHTTP(0xc0104a3f90, 0x7f41414b65b0, 0xc000bcae10, 0xc00a65fe00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00b938960, 0xc00f061d40, 0x74261e0, 0xc000bcae10, 0xc00a65fe00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:59266]
I0611 09:03:27.034656  109206 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0611 09:03:27.034692  109206 healthz.go:175] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0611 09:03:27.035035  109206 wrap.go:47] GET /healthz: (1.508279ms) 500
goroutine 27435 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00b3efc70, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00b3efc70, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc002bb7f20, 0x1f4)
net/http.Error(0x7f41414b65b0, 0xc000bcaee0, 0xc0038d0000, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f41414b65b0, 0xc000bcaee0, 0xc00df18600)
net/http.HandlerFunc.ServeHTTP(0xc00d3315a0, 0x7f41414b65b0, 0xc000bcaee0, 0xc00df18600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc002e57ec0, 0x7f41414b65b0, 0xc000bcaee0, 0xc00df18600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00edee150, 0x7f41414b65b0, 0xc000bcaee0, 0xc00df18600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43c0f1b, 0xe, 0xc010a6a630, 0xc00edee150, 0x7f41414b65b0, 0xc000bcaee0, 0xc00df18600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f41414b65b0, 0xc000bcaee0, 0xc00df18600)
net/http.HandlerFunc.ServeHTTP(0xc010413d00, 0x7f41414b65b0, 0xc000bcaee0, 0xc00df18600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f41414b65b0, 0xc000bcaee0, 0xc00df18600)
net/http.HandlerFunc.ServeHTTP(0xc00edba870, 0x7f41414b65b0, 0xc000bcaee0, 0xc00df18600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f41414b65b0, 0xc000bcaee0, 0xc00df18600)
net/http.HandlerFunc.ServeHTTP(0xc010413d40, 0x7f41414b65b0, 0xc000bcaee0, 0xc00df18600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f41414b65b0, 0xc000bcaee0, 0xc00df18500)
net/http.HandlerFunc.ServeHTTP(0xc0104a3f90, 0x7f41414b65b0, 0xc000bcaee0, 0xc00df18500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00be1c3c0, 0xc00f061d40, 0x74261e0, 0xc000bcaee0, 0xc00df18500)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59266]
I0611 09:03:27.042888  109206 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.285166ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59266]
I0611 09:03:27.043278  109206 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I0611 09:03:27.062437  109206 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:persistent-volume-binder: (1.699104ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59266]
I0611 09:03:27.082985  109206 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.217853ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59266]
I0611 09:03:27.083434  109206 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0611 09:03:27.102097  109206 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pod-garbage-collector: (1.381147ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59266]
I0611 09:03:27.122656  109206 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0611 09:03:27.122688  109206 healthz.go:175] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0611 09:03:27.122872  109206 wrap.go:47] GET /healthz: (1.550125ms) 500
goroutine 27492 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00bbae620, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00bbae620, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc002a6d380, 0x1f4)
net/http.Error(0x7f41414b65b0, 0xc000bcb230, 0xc00c9a88c0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f41414b65b0, 0xc000bcb230, 0xc00ed18300)
net/http.HandlerFunc.ServeHTTP(0xc00d3315a0, 0x7f41414b65b0, 0xc000bcb230, 0xc00ed18300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc002e57ec0, 0x7f41414b65b0, 0xc000bcb230, 0xc00ed18300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00edee150, 0x7f41414b65b0, 0xc000bcb230, 0xc00ed18300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43c0f1b, 0xe, 0xc010a6a630, 0xc00edee150, 0x7f41414b65b0, 0xc000bcb230, 0xc00ed18300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f41414b65b0, 0xc000bcb230, 0xc00ed18300)
net/http.HandlerFunc.ServeHTTP(0xc010413d00, 0x7f41414b65b0, 0xc000bcb230, 0xc00ed18300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f41414b65b0, 0xc000bcb230, 0xc00ed18300)
net/http.HandlerFunc.ServeHTTP(0xc00edba870, 0x7f41414b65b0, 0xc000bcb230, 0xc00ed18300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f41414b65b0, 0xc000bcb230, 0xc00ed18300)
net/http.HandlerFunc.ServeHTTP(0xc010413d40, 0x7f41414b65b0, 0xc000bcb230, 0xc00ed18300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f41414b65b0, 0xc000bcb230, 0xc00ed18200)
net/http.HandlerFunc.ServeHTTP(0xc0104a3f90, 0x7f41414b65b0, 0xc000bcb230, 0xc00ed18200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00ca60fc0, 0xc00f061d40, 0x74261e0, 0xc000bcb230, 0xc00ed18200)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:59258]
I0611 09:03:27.123026  109206 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.324978ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59266]
I0611 09:03:27.123267  109206 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0611 09:03:27.134650  109206 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0611 09:03:27.134682  109206 healthz.go:175] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0611 09:03:27.134883  109206 wrap.go:47] GET /healthz: (1.338423ms) 500
goroutine 27479 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00b40f730, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00b40f730, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc002affe80, 0x1f4)
net/http.Error(0x7f41414b65b0, 0xc00d1fbec0, 0xc0038d0640, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f41414b65b0, 0xc00d1fbec0, 0xc00e36ce00)
net/http.HandlerFunc.ServeHTTP(0xc00d3315a0, 0x7f41414b65b0, 0xc00d1fbec0, 0xc00e36ce00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc002e57ec0, 0x7f41414b65b0, 0xc00d1fbec0, 0xc00e36ce00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00edee150, 0x7f41414b65b0, 0xc00d1fbec0, 0xc00e36ce00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43c0f1b, 0xe, 0xc010a6a630, 0xc00edee150, 0x7f41414b65b0, 0xc00d1fbec0, 0xc00e36ce00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f41414b65b0, 0xc00d1fbec0, 0xc00e36ce00)
net/http.HandlerFunc.ServeHTTP(0xc010413d00, 0x7f41414b65b0, 0xc00d1fbec0, 0xc00e36ce00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f41414b65b0, 0xc00d1fbec0, 0xc00e36ce00)
net/http.HandlerFunc.ServeHTTP(0xc00edba870, 0x7f41414b65b0, 0xc00d1fbec0, 0xc00e36ce00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f41414b65b0, 0xc00d1fbec0, 0xc00e36ce00)
net/http.HandlerFunc.ServeHTTP(0xc010413d40, 0x7f41414b65b0, 0xc00d1fbec0, 0xc00e36ce00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f41414b65b0, 0xc00d1fbec0, 0xc00e36cd00)
net/http.HandlerFunc.ServeHTTP(0xc0104a3f90, 0x7f41414b65b0, 0xc00d1fbec0, 0xc00e36cd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00bad4c00, 0xc00f061d40, 0x74261e0, 0xc00d1fbec0, 0xc00e36cd00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59266]
I0611 09:03:27.142281  109206 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replicaset-controller: (1.566278ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59266]
I0611 09:03:27.162706  109206 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.077152ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59266]
I0611 09:03:27.164218  109206 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0611 09:03:27.182084  109206 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replication-controller: (1.474672ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59266]
I0611 09:03:27.203050  109206 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.423758ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59266]
I0611 09:03:27.203320  109206 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I0611 09:03:27.222310  109206 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0611 09:03:27.222439  109206 healthz.go:175] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0611 09:03:27.222462  109206 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:resourcequota-controller: (1.712867ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59266]
I0611 09:03:27.222604  109206 wrap.go:47] GET /healthz: (1.208773ms) 500
goroutine 27486 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00be0c000, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00be0c000, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0029f1b80, 0x1f4)
net/http.Error(0x7f41414b65b0, 0xc00d1fbf58, 0xc0038d0b40, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f41414b65b0, 0xc00d1fbf58, 0xc00e36db00)
net/http.HandlerFunc.ServeHTTP(0xc00d3315a0, 0x7f41414b65b0, 0xc00d1fbf58, 0xc00e36db00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc002e57ec0, 0x7f41414b65b0, 0xc00d1fbf58, 0xc00e36db00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00edee150, 0x7f41414b65b0, 0xc00d1fbf58, 0xc00e36db00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43c0f1b, 0xe, 0xc010a6a630, 0xc00edee150, 0x7f41414b65b0, 0xc00d1fbf58, 0xc00e36db00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f41414b65b0, 0xc00d1fbf58, 0xc00e36db00)
net/http.HandlerFunc.ServeHTTP(0xc010413d00, 0x7f41414b65b0, 0xc00d1fbf58, 0xc00e36db00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f41414b65b0, 0xc00d1fbf58, 0xc00e36db00)
net/http.HandlerFunc.ServeHTTP(0xc00edba870, 0x7f41414b65b0, 0xc00d1fbf58, 0xc00e36db00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f41414b65b0, 0xc00d1fbf58, 0xc00e36db00)
net/http.HandlerFunc.ServeHTTP(0xc010413d40, 0x7f41414b65b0, 0xc00d1fbf58, 0xc00e36db00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f41414b65b0, 0xc00d1fbf58, 0xc00e36da00)
net/http.HandlerFunc.ServeHTTP(0xc0104a3f90, 0x7f41414b65b0, 0xc00d1fbf58, 0xc00e36da00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00cbb80c0, 0xc00f061d40, 0x74261e0, 0xc00d1fbf58, 0xc00e36da00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:59258]
I0611 09:03:27.234583  109206 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0611 09:03:27.234628  109206 healthz.go:175] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0611 09:03:27.234794  109206 wrap.go:47] GET /healthz: (1.313205ms) 500
goroutine 27488 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00be0c1c0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00be0c1c0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0029f1e40, 0x1f4)
net/http.Error(0x7f41414b65b0, 0xc00d1fbf68, 0xc002f032c0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f41414b65b0, 0xc00d1fbf68, 0xc002af2100)
net/http.HandlerFunc.ServeHTTP(0xc00d3315a0, 0x7f41414b65b0, 0xc00d1fbf68, 0xc002af2100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc002e57ec0, 0x7f41414b65b0, 0xc00d1fbf68, 0xc002af2100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00edee150, 0x7f41414b65b0, 0xc00d1fbf68, 0xc002af2100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43c0f1b, 0xe, 0xc010a6a630, 0xc00edee150, 0x7f41414b65b0, 0xc00d1fbf68, 0xc002af2100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f41414b65b0, 0xc00d1fbf68, 0xc002af2100)
net/http.HandlerFunc.ServeHTTP(0xc010413d00, 0x7f41414b65b0, 0xc00d1fbf68, 0xc002af2100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f41414b65b0, 0xc00d1fbf68, 0xc002af2100)
net/http.HandlerFunc.ServeHTTP(0xc00edba870, 0x7f41414b65b0, 0xc00d1fbf68, 0xc002af2100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f41414b65b0, 0xc00d1fbf68, 0xc002af2100)
net/http.HandlerFunc.ServeHTTP(0xc010413d40, 0x7f41414b65b0, 0xc00d1fbf68, 0xc002af2100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f41414b65b0, 0xc00d1fbf68, 0xc002af2000)
net/http.HandlerFunc.ServeHTTP(0xc0104a3f90, 0x7f41414b65b0, 0xc00d1fbf68, 0xc002af2000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00cbb8480, 0xc00f061d40, 0x74261e0, 0xc00d1fbf68, 0xc002af2000)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:27.242460  109206 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.868783ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:27.242703  109206 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0611 09:03:27.288230  109206 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:route-controller: (27.574944ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:27.291031  109206 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.127294ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:27.291237  109206 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I0611 09:03:27.302279  109206 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-account-controller: (1.111994ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:27.322242  109206 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.633875ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:27.322443  109206 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I0611 09:03:27.323002  109206 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0611 09:03:27.323038  109206 healthz.go:175] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0611 09:03:27.323191  109206 wrap.go:47] GET /healthz: (1.755798ms) 500
goroutine 27511 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00a557030, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00a557030, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc002913a20, 0x1f4)
net/http.Error(0x7f41414b65b0, 0xc00d5cbb48, 0xc0062d5180, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f41414b65b0, 0xc00d5cbb48, 0xc00eb73600)
net/http.HandlerFunc.ServeHTTP(0xc00d3315a0, 0x7f41414b65b0, 0xc00d5cbb48, 0xc00eb73600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc002e57ec0, 0x7f41414b65b0, 0xc00d5cbb48, 0xc00eb73600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00edee150, 0x7f41414b65b0, 0xc00d5cbb48, 0xc00eb73600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43c0f1b, 0xe, 0xc010a6a630, 0xc00edee150, 0x7f41414b65b0, 0xc00d5cbb48, 0xc00eb73600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f41414b65b0, 0xc00d5cbb48, 0xc00eb73600)
net/http.HandlerFunc.ServeHTTP(0xc010413d00, 0x7f41414b65b0, 0xc00d5cbb48, 0xc00eb73600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f41414b65b0, 0xc00d5cbb48, 0xc00eb73600)
net/http.HandlerFunc.ServeHTTP(0xc00edba870, 0x7f41414b65b0, 0xc00d5cbb48, 0xc00eb73600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f41414b65b0, 0xc00d5cbb48, 0xc00eb73600)
net/http.HandlerFunc.ServeHTTP(0xc010413d40, 0x7f41414b65b0, 0xc00d5cbb48, 0xc00eb73600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f41414b65b0, 0xc00d5cbb48, 0xc00eb73500)
net/http.HandlerFunc.ServeHTTP(0xc0104a3f90, 0x7f41414b65b0, 0xc00d5cbb48, 0xc00eb73500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00ccfa2a0, 0xc00f061d40, 0x74261e0, 0xc00d5cbb48, 0xc00eb73500)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:59266]
I0611 09:03:27.334629  109206 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0611 09:03:27.334665  109206 healthz.go:175] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0611 09:03:27.334798  109206 wrap.go:47] GET /healthz: (1.262371ms) 500
goroutine 27463 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00bbddc70, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00bbddc70, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0029b4fa0, 0x1f4)
net/http.Error(0x7f41414b65b0, 0xc002f73128, 0xc00432fcc0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f41414b65b0, 0xc002f73128, 0xc00deb6f00)
net/http.HandlerFunc.ServeHTTP(0xc00d3315a0, 0x7f41414b65b0, 0xc002f73128, 0xc00deb6f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc002e57ec0, 0x7f41414b65b0, 0xc002f73128, 0xc00deb6f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00edee150, 0x7f41414b65b0, 0xc002f73128, 0xc00deb6f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43c0f1b, 0xe, 0xc010a6a630, 0xc00edee150, 0x7f41414b65b0, 0xc002f73128, 0xc00deb6f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f41414b65b0, 0xc002f73128, 0xc00deb6f00)
net/http.HandlerFunc.ServeHTTP(0xc010413d00, 0x7f41414b65b0, 0xc002f73128, 0xc00deb6f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f41414b65b0, 0xc002f73128, 0xc00deb6f00)
net/http.HandlerFunc.ServeHTTP(0xc00edba870, 0x7f41414b65b0, 0xc002f73128, 0xc00deb6f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f41414b65b0, 0xc002f73128, 0xc00deb6f00)
net/http.HandlerFunc.ServeHTTP(0xc010413d40, 0x7f41414b65b0, 0xc002f73128, 0xc00deb6f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f41414b65b0, 0xc002f73128, 0xc00deb6e00)
net/http.HandlerFunc.ServeHTTP(0xc0104a3f90, 0x7f41414b65b0, 0xc002f73128, 0xc00deb6e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00cd04960, 0xc00f061d40, 0x74261e0, 0xc002f73128, 0xc00deb6e00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59266]
I0611 09:03:27.342013  109206 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-controller: (1.412403ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59266]
I0611 09:03:27.362854  109206 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.125311ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59266]
I0611 09:03:27.363095  109206 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I0611 09:03:27.384926  109206 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:statefulset-controller: (1.525367ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59266]
I0611 09:03:27.402700  109206 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.991822ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59266]
I0611 09:03:27.402909  109206 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0611 09:03:27.422778  109206 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:ttl-controller: (2.07857ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59266]
I0611 09:03:27.422998  109206 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0611 09:03:27.423032  109206 healthz.go:175] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0611 09:03:27.423198  109206 wrap.go:47] GET /healthz: (1.759414ms) 500
goroutine 27496 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00bbaed20, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00bbaed20, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0027d2f60, 0x1f4)
net/http.Error(0x7f41414b65b0, 0xc000bcb4f0, 0xc00c9a9040, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f41414b65b0, 0xc000bcb4f0, 0xc00b3f0e00)
net/http.HandlerFunc.ServeHTTP(0xc00d3315a0, 0x7f41414b65b0, 0xc000bcb4f0, 0xc00b3f0e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc002e57ec0, 0x7f41414b65b0, 0xc000bcb4f0, 0xc00b3f0e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00edee150, 0x7f41414b65b0, 0xc000bcb4f0, 0xc00b3f0e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43c0f1b, 0xe, 0xc010a6a630, 0xc00edee150, 0x7f41414b65b0, 0xc000bcb4f0, 0xc00b3f0e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f41414b65b0, 0xc000bcb4f0, 0xc00b3f0e00)
net/http.HandlerFunc.ServeHTTP(0xc010413d00, 0x7f41414b65b0, 0xc000bcb4f0, 0xc00b3f0e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f41414b65b0, 0xc000bcb4f0, 0xc00b3f0e00)
net/http.HandlerFunc.ServeHTTP(0xc00edba870, 0x7f41414b65b0, 0xc000bcb4f0, 0xc00b3f0e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f41414b65b0, 0xc000bcb4f0, 0xc00b3f0e00)
net/http.HandlerFunc.ServeHTTP(0xc010413d40, 0x7f41414b65b0, 0xc000bcb4f0, 0xc00b3f0e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f41414b65b0, 0xc000bcb4f0, 0xc00b3f0b00)
net/http.HandlerFunc.ServeHTTP(0xc0104a3f90, 0x7f41414b65b0, 0xc000bcb4f0, 0xc00b3f0b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00cfc0840, 0xc00f061d40, 0x74261e0, 0xc000bcb4f0, 0xc00b3f0b00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:59258]
I0611 09:03:27.434747  109206 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0611 09:03:27.434790  109206 healthz.go:175] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0611 09:03:27.434950  109206 wrap.go:47] GET /healthz: (1.432667ms) 500
goroutine 27541 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00be0c770, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00be0c770, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0028a9260, 0x1f4)
net/http.Error(0x7f41414b65b0, 0xc0029c6100, 0xc002f03900, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f41414b65b0, 0xc0029c6100, 0xc002af3a00)
net/http.HandlerFunc.ServeHTTP(0xc00d3315a0, 0x7f41414b65b0, 0xc0029c6100, 0xc002af3a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc002e57ec0, 0x7f41414b65b0, 0xc0029c6100, 0xc002af3a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00edee150, 0x7f41414b65b0, 0xc0029c6100, 0xc002af3a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43c0f1b, 0xe, 0xc010a6a630, 0xc00edee150, 0x7f41414b65b0, 0xc0029c6100, 0xc002af3a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f41414b65b0, 0xc0029c6100, 0xc002af3a00)
net/http.HandlerFunc.ServeHTTP(0xc010413d00, 0x7f41414b65b0, 0xc0029c6100, 0xc002af3a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f41414b65b0, 0xc0029c6100, 0xc002af3a00)
net/http.HandlerFunc.ServeHTTP(0xc00edba870, 0x7f41414b65b0, 0xc0029c6100, 0xc002af3a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f41414b65b0, 0xc0029c6100, 0xc002af3a00)
net/http.HandlerFunc.ServeHTTP(0xc010413d40, 0x7f41414b65b0, 0xc0029c6100, 0xc002af3a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f41414b65b0, 0xc0029c6100, 0xc002af3800)
net/http.HandlerFunc.ServeHTTP(0xc0104a3f90, 0x7f41414b65b0, 0xc0029c6100, 0xc002af3800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00cbb9da0, 0xc00f061d40, 0x74261e0, 0xc0029c6100, 0xc002af3800)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:27.442673  109206 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.998269ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:27.442943  109206 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I0611 09:03:27.462132  109206 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:certificate-controller: (1.470829ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:27.482882  109206 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.199237ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:27.483178  109206 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I0611 09:03:27.502107  109206 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pvc-protection-controller: (1.39796ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:27.522727  109206 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0611 09:03:27.522760  109206 healthz.go:175] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0611 09:03:27.522927  109206 wrap.go:47] GET /healthz: (1.629191ms) 500
goroutine 27513 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00a557500, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00a557500, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc002787080, 0x1f4)
net/http.Error(0x7f41414b65b0, 0xc00d5cbc48, 0xc0038d1540, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f41414b65b0, 0xc00d5cbc48, 0xc00da77b00)
net/http.HandlerFunc.ServeHTTP(0xc00d3315a0, 0x7f41414b65b0, 0xc00d5cbc48, 0xc00da77b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc002e57ec0, 0x7f41414b65b0, 0xc00d5cbc48, 0xc00da77b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00edee150, 0x7f41414b65b0, 0xc00d5cbc48, 0xc00da77b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43c0f1b, 0xe, 0xc010a6a630, 0xc00edee150, 0x7f41414b65b0, 0xc00d5cbc48, 0xc00da77b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f41414b65b0, 0xc00d5cbc48, 0xc00da77b00)
net/http.HandlerFunc.ServeHTTP(0xc010413d00, 0x7f41414b65b0, 0xc00d5cbc48, 0xc00da77b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f41414b65b0, 0xc00d5cbc48, 0xc00da77b00)
net/http.HandlerFunc.ServeHTTP(0xc00edba870, 0x7f41414b65b0, 0xc00d5cbc48, 0xc00da77b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f41414b65b0, 0xc00d5cbc48, 0xc00da77b00)
net/http.HandlerFunc.ServeHTTP(0xc010413d40, 0x7f41414b65b0, 0xc00d5cbc48, 0xc00da77b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f41414b65b0, 0xc00d5cbc48, 0xc00da77800)
net/http.HandlerFunc.ServeHTTP(0xc0104a3f90, 0x7f41414b65b0, 0xc00d5cbc48, 0xc00da77800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00ccfad80, 0xc00f061d40, 0x74261e0, 0xc00d5cbc48, 0xc00da77800)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:59266]
I0611 09:03:27.523821  109206 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.161245ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:27.524150  109206 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0611 09:03:27.534798  109206 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0611 09:03:27.534830  109206 healthz.go:175] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0611 09:03:27.535003  109206 wrap.go:47] GET /healthz: (1.443394ms) 500
goroutine 27560 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00f576e00, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00f576e00, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00120a340, 0x1f4)
net/http.Error(0x7f41414b65b0, 0xc002b62cb0, 0xc0038d1cc0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f41414b65b0, 0xc002b62cb0, 0xc00b8a1c00)
net/http.HandlerFunc.ServeHTTP(0xc00d3315a0, 0x7f41414b65b0, 0xc002b62cb0, 0xc00b8a1c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc002e57ec0, 0x7f41414b65b0, 0xc002b62cb0, 0xc00b8a1c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00edee150, 0x7f41414b65b0, 0xc002b62cb0, 0xc00b8a1c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43c0f1b, 0xe, 0xc010a6a630, 0xc00edee150, 0x7f41414b65b0, 0xc002b62cb0, 0xc00b8a1c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f41414b65b0, 0xc002b62cb0, 0xc00b8a1c00)
net/http.HandlerFunc.ServeHTTP(0xc010413d00, 0x7f41414b65b0, 0xc002b62cb0, 0xc00b8a1c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f41414b65b0, 0xc002b62cb0, 0xc00b8a1c00)
net/http.HandlerFunc.ServeHTTP(0xc00edba870, 0x7f41414b65b0, 0xc002b62cb0, 0xc00b8a1c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f41414b65b0, 0xc002b62cb0, 0xc00b8a1c00)
net/http.HandlerFunc.ServeHTTP(0xc010413d40, 0x7f41414b65b0, 0xc002b62cb0, 0xc00b8a1c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f41414b65b0, 0xc002b62cb0, 0xc00b8a1b00)
net/http.HandlerFunc.ServeHTTP(0xc0104a3f90, 0x7f41414b65b0, 0xc002b62cb0, 0xc00b8a1b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00d5a3860, 0xc00f061d40, 0x74261e0, 0xc002b62cb0, 0xc00b8a1b00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:27.541964  109206 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pv-protection-controller: (1.316394ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:27.563560  109206 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.88167ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:27.563817  109206 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0611 09:03:27.582017  109206 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles/system:controller:bootstrap-signer: (1.388333ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:27.583909  109206 wrap.go:47] GET /api/v1/namespaces/kube-public: (1.38246ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:27.602755  109206 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles: (2.120241ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:27.603005  109206 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0611 09:03:27.622338  109206 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0611 09:03:27.622401  109206 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/extension-apiserver-authentication-reader: (1.730124ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:27.622406  109206 healthz.go:175] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0611 09:03:27.622587  109206 wrap.go:47] GET /healthz: (1.172095ms) 500
goroutine 27570 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00f590310, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00f590310, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0005ffd80, 0x1f4)
net/http.Error(0x7f41414b65b0, 0xc0005066c0, 0xc0028e0640, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f41414b65b0, 0xc0005066c0, 0xc00f4ee400)
net/http.HandlerFunc.ServeHTTP(0xc00d3315a0, 0x7f41414b65b0, 0xc0005066c0, 0xc00f4ee400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc002e57ec0, 0x7f41414b65b0, 0xc0005066c0, 0xc00f4ee400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00edee150, 0x7f41414b65b0, 0xc0005066c0, 0xc00f4ee400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43c0f1b, 0xe, 0xc010a6a630, 0xc00edee150, 0x7f41414b65b0, 0xc0005066c0, 0xc00f4ee400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f41414b65b0, 0xc0005066c0, 0xc00f4ee400)
net/http.HandlerFunc.ServeHTTP(0xc010413d00, 0x7f41414b65b0, 0xc0005066c0, 0xc00f4ee400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f41414b65b0, 0xc0005066c0, 0xc00f4ee400)
net/http.HandlerFunc.ServeHTTP(0xc00edba870, 0x7f41414b65b0, 0xc0005066c0, 0xc00f4ee400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f41414b65b0, 0xc0005066c0, 0xc00f4ee400)
net/http.HandlerFunc.ServeHTTP(0xc010413d40, 0x7f41414b65b0, 0xc0005066c0, 0xc00f4ee400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f41414b65b0, 0xc0005066c0, 0xc00f4ee300)
net/http.HandlerFunc.ServeHTTP(0xc0104a3f90, 0x7f41414b65b0, 0xc0005066c0, 0xc00f4ee300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00ccfbec0, 0xc00f061d40, 0x74261e0, 0xc0005066c0, 0xc00f4ee300)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:59266]
I0611 09:03:27.624266  109206 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.489178ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:27.634709  109206 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0611 09:03:27.634742  109206 healthz.go:175] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0611 09:03:27.634946  109206 wrap.go:47] GET /healthz: (1.32347ms) 500
goroutine 27572 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00f590460, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00f590460, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0000f17a0, 0x1f4)
net/http.Error(0x7f41414b65b0, 0xc000506898, 0xc0062d5680, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f41414b65b0, 0xc000506898, 0xc00f4ee900)
net/http.HandlerFunc.ServeHTTP(0xc00d3315a0, 0x7f41414b65b0, 0xc000506898, 0xc00f4ee900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc002e57ec0, 0x7f41414b65b0, 0xc000506898, 0xc00f4ee900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00edee150, 0x7f41414b65b0, 0xc000506898, 0xc00f4ee900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43c0f1b, 0xe, 0xc010a6a630, 0xc00edee150, 0x7f41414b65b0, 0xc000506898, 0xc00f4ee900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f41414b65b0, 0xc000506898, 0xc00f4ee900)
net/http.HandlerFunc.ServeHTTP(0xc010413d00, 0x7f41414b65b0, 0xc000506898, 0xc00f4ee900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f41414b65b0, 0xc000506898, 0xc00f4ee900)
net/http.HandlerFunc.ServeHTTP(0xc00edba870, 0x7f41414b65b0, 0xc000506898, 0xc00f4ee900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f41414b65b0, 0xc000506898, 0xc00f4ee900)
net/http.HandlerFunc.ServeHTTP(0xc010413d40, 0x7f41414b65b0, 0xc000506898, 0xc00f4ee900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f41414b65b0, 0xc000506898, 0xc00f4ee800)
net/http.HandlerFunc.ServeHTTP(0xc0104a3f90, 0x7f41414b65b0, 0xc000506898, 0xc00f4ee800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00fc161e0, 0xc00f061d40, 0x74261e0, 0xc000506898, 0xc00f4ee800)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:27.642785  109206 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.162786ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:27.643138  109206 storage_rbac.go:254] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
I0611 09:03:27.662151  109206 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:bootstrap-signer: (1.480852ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:27.663812  109206 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.201657ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:27.682660  109206 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.013902ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:27.682927  109206 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0611 09:03:27.702034  109206 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:cloud-provider: (1.367133ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:27.704041  109206 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.396345ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:27.722606  109206 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0611 09:03:27.722641  109206 healthz.go:175] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0611 09:03:27.722788  109206 wrap.go:47] GET /healthz: (1.381786ms) 500
goroutine 27551 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00be0db90, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00be0db90, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc000b18ba0, 0x1f4)
net/http.Error(0x7f41414b65b0, 0xc0029c6e48, 0xc003cbc780, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f41414b65b0, 0xc0029c6e48, 0xc00fe27500)
net/http.HandlerFunc.ServeHTTP(0xc00d3315a0, 0x7f41414b65b0, 0xc0029c6e48, 0xc00fe27500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc002e57ec0, 0x7f41414b65b0, 0xc0029c6e48, 0xc00fe27500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00edee150, 0x7f41414b65b0, 0xc0029c6e48, 0xc00fe27500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43c0f1b, 0xe, 0xc010a6a630, 0xc00edee150, 0x7f41414b65b0, 0xc0029c6e48, 0xc00fe27500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f41414b65b0, 0xc0029c6e48, 0xc00fe27500)
net/http.HandlerFunc.ServeHTTP(0xc010413d00, 0x7f41414b65b0, 0xc0029c6e48, 0xc00fe27500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f41414b65b0, 0xc0029c6e48, 0xc00fe27500)
net/http.HandlerFunc.ServeHTTP(0xc00edba870, 0x7f41414b65b0, 0xc0029c6e48, 0xc00fe27500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f41414b65b0, 0xc0029c6e48, 0xc00fe27500)
net/http.HandlerFunc.ServeHTTP(0xc010413d40, 0x7f41414b65b0, 0xc0029c6e48, 0xc00fe27500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f41414b65b0, 0xc0029c6e48, 0xc00fe27400)
net/http.HandlerFunc.ServeHTTP(0xc0104a3f90, 0x7f41414b65b0, 0xc0029c6e48, 0xc00fe27400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00f4935c0, 0xc00f061d40, 0x74261e0, 0xc0029c6e48, 0xc00fe27400)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:59266]
I0611 09:03:27.723728  109206 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (3.051769ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:27.724020  109206 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0611 09:03:27.734607  109206 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0611 09:03:27.734640  109206 healthz.go:175] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0611 09:03:27.734872  109206 wrap.go:47] GET /healthz: (1.379558ms) 500
goroutine 27576 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00f5909a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00f5909a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00003bae0, 0x1f4)
net/http.Error(0x7f41414b65b0, 0xc000506b30, 0xc00c9a97c0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f41414b65b0, 0xc000506b30, 0xc00f4ef500)
net/http.HandlerFunc.ServeHTTP(0xc00d3315a0, 0x7f41414b65b0, 0xc000506b30, 0xc00f4ef500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc002e57ec0, 0x7f41414b65b0, 0xc000506b30, 0xc00f4ef500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00edee150, 0x7f41414b65b0, 0xc000506b30, 0xc00f4ef500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43c0f1b, 0xe, 0xc010a6a630, 0xc00edee150, 0x7f41414b65b0, 0xc000506b30, 0xc00f4ef500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f41414b65b0, 0xc000506b30, 0xc00f4ef500)
net/http.HandlerFunc.ServeHTTP(0xc010413d00, 0x7f41414b65b0, 0xc000506b30, 0xc00f4ef500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f41414b65b0, 0xc000506b30, 0xc00f4ef500)
net/http.HandlerFunc.ServeHTTP(0xc00edba870, 0x7f41414b65b0, 0xc000506b30, 0xc00f4ef500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f41414b65b0, 0xc000506b30, 0xc00f4ef500)
net/http.HandlerFunc.ServeHTTP(0xc010413d40, 0x7f41414b65b0, 0xc000506b30, 0xc00f4ef500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f41414b65b0, 0xc000506b30, 0xc00f4ef300)
net/http.HandlerFunc.ServeHTTP(0xc0104a3f90, 0x7f41414b65b0, 0xc000506b30, 0xc00f4ef300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00fc167e0, 0xc00f061d40, 0x74261e0, 0xc000506b30, 0xc00f4ef300)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:27.742270  109206 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:token-cleaner: (1.623817ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:27.744224  109206 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.391322ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:27.762837  109206 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.126788ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:27.763182  109206 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0611 09:03:27.782418  109206 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-controller-manager: (1.146173ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:27.784850  109206 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.711484ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:27.802458  109206 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.820608ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:27.802712  109206 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0611 09:03:27.821984  109206 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-scheduler: (1.333244ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:27.822585  109206 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0611 09:03:27.822621  109206 healthz.go:175] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0611 09:03:27.822768  109206 wrap.go:47] GET /healthz: (1.3544ms) 500
goroutine 27607 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00f5b84d0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00f5b84d0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc001daaf40, 0x1f4)
net/http.Error(0x7f41414b65b0, 0xc00d4ceb68, 0xc002a28640, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f41414b65b0, 0xc00d4ceb68, 0xc00fd77d00)
net/http.HandlerFunc.ServeHTTP(0xc00d3315a0, 0x7f41414b65b0, 0xc00d4ceb68, 0xc00fd77d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc002e57ec0, 0x7f41414b65b0, 0xc00d4ceb68, 0xc00fd77d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00edee150, 0x7f41414b65b0, 0xc00d4ceb68, 0xc00fd77d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43c0f1b, 0xe, 0xc010a6a630, 0xc00edee150, 0x7f41414b65b0, 0xc00d4ceb68, 0xc00fd77d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f41414b65b0, 0xc00d4ceb68, 0xc00fd77d00)
net/http.HandlerFunc.ServeHTTP(0xc010413d00, 0x7f41414b65b0, 0xc00d4ceb68, 0xc00fd77d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f41414b65b0, 0xc00d4ceb68, 0xc00fd77d00)
net/http.HandlerFunc.ServeHTTP(0xc00edba870, 0x7f41414b65b0, 0xc00d4ceb68, 0xc00fd77d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f41414b65b0, 0xc00d4ceb68, 0xc00fd77d00)
net/http.HandlerFunc.ServeHTTP(0xc010413d40, 0x7f41414b65b0, 0xc00d4ceb68, 0xc00fd77d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f41414b65b0, 0xc00d4ceb68, 0xc00fd77c00)
net/http.HandlerFunc.ServeHTTP(0xc0104a3f90, 0x7f41414b65b0, 0xc00d4ceb68, 0xc00fd77c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00fdaa960, 0xc00f061d40, 0x74261e0, 0xc00d4ceb68, 0xc00fd77c00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:59266]
I0611 09:03:27.824455  109206 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.904243ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:27.834631  109206 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0611 09:03:27.834666  109206 healthz.go:175] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0611 09:03:27.835269  109206 wrap.go:47] GET /healthz: (1.739379ms) 500
goroutine 27611 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00f5b8e00, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00f5b8e00, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc001dabae0, 0x1f4)
net/http.Error(0x7f41414b65b0, 0xc00d4cebb8, 0xc002a28b40, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f41414b65b0, 0xc00d4cebb8, 0xc010032800)
net/http.HandlerFunc.ServeHTTP(0xc00d3315a0, 0x7f41414b65b0, 0xc00d4cebb8, 0xc010032800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc002e57ec0, 0x7f41414b65b0, 0xc00d4cebb8, 0xc010032800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00edee150, 0x7f41414b65b0, 0xc00d4cebb8, 0xc010032800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43c0f1b, 0xe, 0xc010a6a630, 0xc00edee150, 0x7f41414b65b0, 0xc00d4cebb8, 0xc010032800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f41414b65b0, 0xc00d4cebb8, 0xc010032800)
net/http.HandlerFunc.ServeHTTP(0xc010413d00, 0x7f41414b65b0, 0xc00d4cebb8, 0xc010032800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f41414b65b0, 0xc00d4cebb8, 0xc010032800)
net/http.HandlerFunc.ServeHTTP(0xc00edba870, 0x7f41414b65b0, 0xc00d4cebb8, 0xc010032800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f41414b65b0, 0xc00d4cebb8, 0xc010032800)
net/http.HandlerFunc.ServeHTTP(0xc010413d40, 0x7f41414b65b0, 0xc00d4cebb8, 0xc010032800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f41414b65b0, 0xc00d4cebb8, 0xc010032700)
net/http.HandlerFunc.ServeHTTP(0xc0104a3f90, 0x7f41414b65b0, 0xc00d4cebb8, 0xc010032700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00fdaaea0, 0xc00f061d40, 0x74261e0, 0xc00d4cebb8, 0xc010032700)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:27.842671  109206 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.986561ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:27.843910  109206 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
E0611 09:03:27.856403  109206 factory.go:711] Error getting pod permit-pluginf8f2d933-db94-4867-b360-482eb7f5772b/test-pod for retry: Get http://127.0.0.1:37011/api/v1/namespaces/permit-pluginf8f2d933-db94-4867-b360-482eb7f5772b/pods/test-pod: dial tcp 127.0.0.1:37011: connect: connection refused; retrying...
I0611 09:03:27.861635  109206 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::extension-apiserver-authentication-reader: (947.458µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:27.864048  109206 wrap.go:47] GET /api/v1/namespaces/kube-system: (2.024556ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:27.883220  109206 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.449979ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:27.883514  109206 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system::extension-apiserver-authentication-reader in kube-system
I0611 09:03:27.901783  109206 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-controller-manager: (1.067527ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:27.903485  109206 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.256257ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:27.922379  109206 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.689542ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:27.922618  109206 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0611 09:03:27.922678  109206 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0611 09:03:27.922701  109206 healthz.go:175] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0611 09:03:27.922852  109206 wrap.go:47] GET /healthz: (1.0822ms) 500
goroutine 27469 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00f5a4930, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00f5a4930, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0022e6300, 0x1f4)
net/http.Error(0x7f41414b65b0, 0xc002f73408, 0xc002a29040, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f41414b65b0, 0xc002f73408, 0xc00fe5c600)
net/http.HandlerFunc.ServeHTTP(0xc00d3315a0, 0x7f41414b65b0, 0xc002f73408, 0xc00fe5c600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc002e57ec0, 0x7f41414b65b0, 0xc002f73408, 0xc00fe5c600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00edee150, 0x7f41414b65b0, 0xc002f73408, 0xc00fe5c600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43c0f1b, 0xe, 0xc010a6a630, 0xc00edee150, 0x7f41414b65b0, 0xc002f73408, 0xc00fe5c600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f41414b65b0, 0xc002f73408, 0xc00fe5c600)
net/http.HandlerFunc.ServeHTTP(0xc010413d00, 0x7f41414b65b0, 0xc002f73408, 0xc00fe5c600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f41414b65b0, 0xc002f73408, 0xc00fe5c600)
net/http.HandlerFunc.ServeHTTP(0xc00edba870, 0x7f41414b65b0, 0xc002f73408, 0xc00fe5c600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f41414b65b0, 0xc002f73408, 0xc00fe5c600)
net/http.HandlerFunc.ServeHTTP(0xc010413d40, 0x7f41414b65b0, 0xc002f73408, 0xc00fe5c600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f41414b65b0, 0xc002f73408, 0xc00fe5c500)
net/http.HandlerFunc.ServeHTTP(0xc0104a3f90, 0x7f41414b65b0, 0xc002f73408, 0xc00fe5c500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00fc20480, 0xc00f061d40, 0x74261e0, 0xc002f73408, 0xc00fe5c500)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:59266]
I0611 09:03:27.934438  109206 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0611 09:03:27.934474  109206 healthz.go:175] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0611 09:03:27.934717  109206 wrap.go:47] GET /healthz: (1.103277ms) 500
goroutine 27651 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00f5cc380, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00f5cc380, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00118b5c0, 0x1f4)
net/http.Error(0x7f41414b65b0, 0xc000507260, 0xc0028e0f00, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f41414b65b0, 0xc000507260, 0xc00ff5fc00)
net/http.HandlerFunc.ServeHTTP(0xc00d3315a0, 0x7f41414b65b0, 0xc000507260, 0xc00ff5fc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc002e57ec0, 0x7f41414b65b0, 0xc000507260, 0xc00ff5fc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00edee150, 0x7f41414b65b0, 0xc000507260, 0xc00ff5fc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43c0f1b, 0xe, 0xc010a6a630, 0xc00edee150, 0x7f41414b65b0, 0xc000507260, 0xc00ff5fc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f41414b65b0, 0xc000507260, 0xc00ff5fc00)
net/http.HandlerFunc.ServeHTTP(0xc010413d00, 0x7f41414b65b0, 0xc000507260, 0xc00ff5fc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f41414b65b0, 0xc000507260, 0xc00ff5fc00)
net/http.HandlerFunc.ServeHTTP(0xc00edba870, 0x7f41414b65b0, 0xc000507260, 0xc00ff5fc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f41414b65b0, 0xc000507260, 0xc00ff5fc00)
net/http.HandlerFunc.ServeHTTP(0xc010413d40, 0x7f41414b65b0, 0xc000507260, 0xc00ff5fc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f41414b65b0, 0xc000507260, 0xc00ff5fb00)
net/http.HandlerFunc.ServeHTTP(0xc0104a3f90, 0x7f41414b65b0, 0xc000507260, 0xc00ff5fb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00fc17d40, 0xc00f061d40, 0x74261e0, 0xc000507260, 0xc00ff5fb00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59266]
I0611 09:03:27.941855  109206 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-scheduler: (1.203673ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59266]
I0611 09:03:27.943823  109206 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.550821ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59266]
I0611 09:03:27.962776  109206 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.022458ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59266]
I0611 09:03:27.963099  109206 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0611 09:03:27.982098  109206 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:bootstrap-signer: (1.370132ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59266]
I0611 09:03:27.984072  109206 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.492074ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59266]
I0611 09:03:28.003620  109206 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.952653ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59266]
I0611 09:03:28.003858  109206 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0611 09:03:28.022129  109206 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:cloud-provider: (1.446378ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59266]
I0611 09:03:28.022419  109206 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0611 09:03:28.022452  109206 healthz.go:175] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0611 09:03:28.022606  109206 wrap.go:47] GET /healthz: (1.235306ms) 500
goroutine 27617 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00f5d49a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00f5d49a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0037389e0, 0x1f4)
net/http.Error(0x7f41414b65b0, 0xc00d4cee80, 0xc0028e1400, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f41414b65b0, 0xc00d4cee80, 0xc010bf9300)
net/http.HandlerFunc.ServeHTTP(0xc00d3315a0, 0x7f41414b65b0, 0xc00d4cee80, 0xc010bf9300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc002e57ec0, 0x7f41414b65b0, 0xc00d4cee80, 0xc010bf9300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00edee150, 0x7f41414b65b0, 0xc00d4cee80, 0xc010bf9300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43c0f1b, 0xe, 0xc010a6a630, 0xc00edee150, 0x7f41414b65b0, 0xc00d4cee80, 0xc010bf9300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f41414b65b0, 0xc00d4cee80, 0xc010bf9300)
net/http.HandlerFunc.ServeHTTP(0xc010413d00, 0x7f41414b65b0, 0xc00d4cee80, 0xc010bf9300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f41414b65b0, 0xc00d4cee80, 0xc010bf9300)
net/http.HandlerFunc.ServeHTTP(0xc00edba870, 0x7f41414b65b0, 0xc00d4cee80, 0xc010bf9300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f41414b65b0, 0xc00d4cee80, 0xc010bf9300)
net/http.HandlerFunc.ServeHTTP(0xc010413d40, 0x7f41414b65b0, 0xc00d4cee80, 0xc010bf9300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f41414b65b0, 0xc00d4cee80, 0xc010bf9200)
net/http.HandlerFunc.ServeHTTP(0xc0104a3f90, 0x7f41414b65b0, 0xc00d4cee80, 0xc010bf9200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0109b6180, 0xc00f061d40, 0x74261e0, 0xc00d4cee80, 0xc010bf9200)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:59258]
I0611 09:03:28.024084  109206 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.437715ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59266]
I0611 09:03:28.034579  109206 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0611 09:03:28.034610  109206 healthz.go:175] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0611 09:03:28.034789  109206 wrap.go:47] GET /healthz: (1.256484ms) 500
goroutine 27677 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00f5da850, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00f5da850, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc003742a40, 0x1f4)
net/http.Error(0x7f41414b65b0, 0xc002f73c68, 0xc0013c6140, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f41414b65b0, 0xc002f73c68, 0xc010d65000)
net/http.HandlerFunc.ServeHTTP(0xc00d3315a0, 0x7f41414b65b0, 0xc002f73c68, 0xc010d65000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc002e57ec0, 0x7f41414b65b0, 0xc002f73c68, 0xc010d65000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00edee150, 0x7f41414b65b0, 0xc002f73c68, 0xc010d65000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43c0f1b, 0xe, 0xc010a6a630, 0xc00edee150, 0x7f41414b65b0, 0xc002f73c68, 0xc010d65000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f41414b65b0, 0xc002f73c68, 0xc010d65000)
net/http.HandlerFunc.ServeHTTP(0xc010413d00, 0x7f41414b65b0, 0xc002f73c68, 0xc010d65000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f41414b65b0, 0xc002f73c68, 0xc010d65000)
net/http.HandlerFunc.ServeHTTP(0xc00edba870, 0x7f41414b65b0, 0xc002f73c68, 0xc010d65000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f41414b65b0, 0xc002f73c68, 0xc010d65000)
net/http.HandlerFunc.ServeHTTP(0xc010413d40, 0x7f41414b65b0, 0xc002f73c68, 0xc010d65000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f41414b65b0, 0xc002f73c68, 0xc010d64f00)
net/http.HandlerFunc.ServeHTTP(0xc0104a3f90, 0x7f41414b65b0, 0xc002f73c68, 0xc010d64f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc010a6c1e0, 0xc00f061d40, 0x74261e0, 0xc002f73c68, 0xc010d64f00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59266]
I0611 09:03:28.042530  109206 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.878923ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59266]
I0611 09:03:28.042812  109206 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0611 09:03:28.062856  109206 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:token-cleaner: (2.114401ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59266]
I0611 09:03:28.064906  109206 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.500719ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59266]
I0611 09:03:28.082838  109206 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.129523ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59266]
I0611 09:03:28.083100  109206 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0611 09:03:28.102050  109206 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings/system:controller:bootstrap-signer: (1.35917ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59266]
I0611 09:03:28.103992  109206 wrap.go:47] GET /api/v1/namespaces/kube-public: (1.407695ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59266]
I0611 09:03:28.122478  109206 healthz.go:161] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0611 09:03:28.122510  109206 healthz.go:175] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0611 09:03:28.122640  109206 wrap.go:47] GET /healthz: (1.255169ms) 500
goroutine 27643 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00f5e83f0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00f5e83f0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0037744a0, 0x1f4)
net/http.Error(0x7f41414b65b0, 0xc0029c7778, 0xc002a29540, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f41414b65b0, 0xc0029c7778, 0xc010e28000)
net/http.HandlerFunc.ServeHTTP(0xc00d3315a0, 0x7f41414b65b0, 0xc0029c7778, 0xc010e28000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc002e57ec0, 0x7f41414b65b0, 0xc0029c7778, 0xc010e28000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00edee150, 0x7f41414b65b0, 0xc0029c7778, 0xc010e28000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x43c0f1b, 0xe, 0xc010a6a630, 0xc00edee150, 0x7f41414b65b0, 0xc0029c7778, 0xc010e28000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f41414b65b0, 0xc0029c7778, 0xc010e28000)
net/http.HandlerFunc.ServeHTTP(0xc010413d00, 0x7f41414b65b0, 0xc0029c7778, 0xc010e28000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f41414b65b0, 0xc0029c7778, 0xc010e28000)
net/http.HandlerFunc.ServeHTTP(0xc00edba870, 0x7f41414b65b0, 0xc0029c7778, 0xc010e28000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f41414b65b0, 0xc0029c7778, 0xc010e28000)
net/http.HandlerFunc.ServeHTTP(0xc010413d40, 0x7f41414b65b0, 0xc0029c7778, 0xc010e28000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f41414b65b0, 0xc0029c7778, 0xc00ff9bf00)
net/http.HandlerFunc.ServeHTTP(0xc0104a3f90, 0x7f41414b65b0, 0xc0029c7778, 0xc00ff9bf00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00ffbf080, 0xc00f061d40, 0x74261e0, 0xc0029c7778, 0xc00ff9bf00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:59258]
I0611 09:03:28.122751  109206 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings: (2.038547ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59266]
I0611 09:03:28.122977  109206 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0611 09:03:28.135047  109206 wrap.go:47] GET /healthz: (1.495482ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59266]
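(Editor's note: the earlier 500s on GET /healthz come from the rbac/bootstrap-roles post-start hook still running; once it finishes, the endpoint flips to 200 as on the line above. The requests from Go-http-client are simply a readiness poll. A minimal sketch of such a poll, assuming a placeholder server address and timeout rather than the test harness's actual code:

package main

import (
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls GET /healthz until the apiserver returns 200,
// i.e. until all post-start hooks (including rbac/bootstrap-roles) report done.
func waitForHealthz(base string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := http.Get(base + "/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(100 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy after %v", timeout)
}
)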
I0611 09:03:28.136736  109206 wrap.go:47] GET /api/v1/namespaces/default: (1.25581ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59266]
I0611 09:03:28.138852  109206 wrap.go:47] POST /api/v1/namespaces: (1.692565ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59266]
I0611 09:03:28.140254  109206 wrap.go:47] GET /api/v1/namespaces/default/services/kubernetes: (1.037089ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59266]
I0611 09:03:28.144726  109206 wrap.go:47] POST /api/v1/namespaces/default/services: (4.031775ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59266]
I0611 09:03:28.149151  109206 wrap.go:47] GET /api/v1/namespaces/default/endpoints/kubernetes: (3.724955ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59266]
I0611 09:03:28.155935  109206 wrap.go:47] POST /api/v1/namespaces/default/endpoints: (3.081178ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59266]
I0611 09:03:28.222552  109206 wrap.go:47] GET /healthz: (1.050566ms) 200 [Go-http-client/1.1 127.0.0.1:59266]
W0611 09:03:28.223573  109206 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0611 09:03:28.223624  109206 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0611 09:03:28.223655  109206 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0611 09:03:28.223666  109206 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0611 09:03:28.223677  109206 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0611 09:03:28.223689  109206 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0611 09:03:28.223698  109206 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0611 09:03:28.223715  109206 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0611 09:03:28.223725  109206 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0611 09:03:28.223737  109206 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
I0611 09:03:28.223804  109206 factory.go:345] Creating scheduler from algorithm provider 'DefaultProvider'
I0611 09:03:28.223815  109206 factory.go:433] Creating scheduler with fit predicates 'map[CheckNodeCondition:{} CheckNodeDiskPressure:{} CheckNodeMemoryPressure:{} CheckNodePIDPressure:{} CheckVolumeBinding:{} GeneralPredicates:{} MatchInterPodAffinity:{} MaxAzureDiskVolumeCount:{} MaxCSIVolumeCountPred:{} MaxEBSVolumeCount:{} MaxGCEPDVolumeCount:{} NoDiskConflict:{} NoVolumeZoneConflict:{} PodToleratesNodeTaints:{}]' and priority functions 'map[BalancedResourceAllocation:{} ImageLocalityPriority:{} InterPodAffinityPriority:{} LeastRequestedPriority:{} NodeAffinityPriority:{} NodePreferAvoidPodsPriority:{} SelectorSpreadPriority:{} TaintTolerationPriority:{}]'
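(Editor's note: the fit-predicate list above includes CheckNodePIDPressure, the predicate this test exercises. As a simplified illustration only, and not the scheduler's actual implementation, a check of that node condition could look like:

package main

import (
	v1 "k8s.io/api/core/v1"
)

// hasPIDPressure reports whether a node's status carries the PIDPressure
// condition set to True; this is the situation the CheckNodePIDPressure
// fit predicate screens against.
func hasPIDPressure(node *v1.Node) bool {
	for _, cond := range node.Status.Conditions {
		if cond.Type == v1.NodePIDPressure && cond.Status == v1.ConditionTrue {
			return true
		}
	}
	return false
}
)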
I0611 09:03:28.224005  109206 controller_utils.go:1029] Waiting for caches to sync for scheduler controller
I0611 09:03:28.224221  109206 reflector.go:122] Starting reflector *v1.Pod (12h0m0s) from k8s.io/kubernetes/test/integration/scheduler/util.go:223
I0611 09:03:28.224236  109206 reflector.go:160] Listing and watching *v1.Pod from k8s.io/kubernetes/test/integration/scheduler/util.go:223
I0611 09:03:28.225443  109206 wrap.go:47] GET /api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0: (708.604µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59266]
I0611 09:03:28.226369  109206 get.go:250] Starting watch for /api/v1/pods, rv=24315 labels= fields=status.phase!=Failed,status.phase!=Succeeded timeout=8m44s
I0611 09:03:28.324188  109206 shared_informer.go:176] caches populated
I0611 09:03:28.324221  109206 controller_utils.go:1036] Caches are synced for scheduler controller
I0611 09:03:28.324595  109206 reflector.go:122] Starting reflector *v1beta1.PodDisruptionBudget (1s) from k8s.io/client-go/informers/factory.go:133
I0611 09:03:28.324623  109206 reflector.go:160] Listing and watching *v1beta1.PodDisruptionBudget from k8s.io/client-go/informers/factory.go:133
I0611 09:03:28.324613  109206 reflector.go:122] Starting reflector *v1.StatefulSet (1s) from k8s.io/client-go/informers/factory.go:133
I0611 09:03:28.324647  109206 reflector.go:160] Listing and watching *v1.StatefulSet from k8s.io/client-go/informers/factory.go:133
I0611 09:03:28.324957  109206 reflector.go:122] Starting reflector *v1.Node (1s) from k8s.io/client-go/informers/factory.go:133
I0611 09:03:28.324975  109206 reflector.go:160] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:133
I0611 09:03:28.325034  109206 reflector.go:122] Starting reflector *v1.StorageClass (1s) from k8s.io/client-go/informers/factory.go:133
I0611 09:03:28.325048  109206 reflector.go:160] Listing and watching *v1.StorageClass from k8s.io/client-go/informers/factory.go:133
I0611 09:03:28.325280  109206 reflector.go:122] Starting reflector *v1.PersistentVolume (1s) from k8s.io/client-go/informers/factory.go:133
I0611 09:03:28.325297  109206 reflector.go:160] Listing and watching *v1.PersistentVolume from k8s.io/client-go/informers/factory.go:133
I0611 09:03:28.325332  109206 reflector.go:122] Starting reflector *v1.PersistentVolumeClaim (1s) from k8s.io/client-go/informers/factory.go:133
I0611 09:03:28.325386  109206 reflector.go:160] Listing and watching *v1.PersistentVolumeClaim from k8s.io/client-go/informers/factory.go:133
I0611 09:03:28.325447  109206 reflector.go:122] Starting reflector *v1.ReplicationController (1s) from k8s.io/client-go/informers/factory.go:133
I0611 09:03:28.325459  109206 reflector.go:160] Listing and watching *v1.ReplicationController from k8s.io/client-go/informers/factory.go:133
I0611 09:03:28.325676  109206 reflector.go:122] Starting reflector *v1.ReplicaSet (1s) from k8s.io/client-go/informers/factory.go:133
I0611 09:03:28.325695  109206 reflector.go:160] Listing and watching *v1.ReplicaSet from k8s.io/client-go/informers/factory.go:133
I0611 09:03:28.326583  109206 wrap.go:47] GET /apis/apps/v1/replicasets?limit=500&resourceVersion=0: (500.287µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59640]
I0611 09:03:28.326584  109206 wrap.go:47] GET /apis/apps/v1/statefulsets?limit=500&resourceVersion=0: (600.696µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:28.326733  109206 wrap.go:47] GET /api/v1/nodes?limit=500&resourceVersion=0: (472.85µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59628]
I0611 09:03:28.326916  109206 wrap.go:47] GET /apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: (846.516µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59632]
I0611 09:03:28.326951  109206 wrap.go:47] GET /api/v1/persistentvolumes?limit=500&resourceVersion=0: (297.237µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59634]
I0611 09:03:28.327179  109206 wrap.go:47] GET /apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: (353.919µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59630]
I0611 09:03:28.327513  109206 get.go:250] Starting watch for /api/v1/nodes, rv=24315 labels= fields= timeout=9m51s
I0611 09:03:28.327659  109206 wrap.go:47] GET /api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: (403.145µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59636]
I0611 09:03:28.327718  109206 wrap.go:47] GET /api/v1/replicationcontrollers?limit=500&resourceVersion=0: (1.17231ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59638]
I0611 09:03:28.327770  109206 get.go:250] Starting watch for /apis/apps/v1/replicasets, rv=24315 labels= fields= timeout=8m28s
I0611 09:03:28.327805  109206 get.go:250] Starting watch for /apis/policy/v1beta1/poddisruptionbudgets, rv=24315 labels= fields= timeout=8m2s
I0611 09:03:28.328108  109206 get.go:250] Starting watch for /apis/apps/v1/statefulsets, rv=24315 labels= fields= timeout=6m34s
I0611 09:03:28.328526  109206 get.go:250] Starting watch for /api/v1/persistentvolumeclaims, rv=24315 labels= fields= timeout=9m33s
I0611 09:03:28.328542  109206 get.go:250] Starting watch for /apis/storage.k8s.io/v1/storageclasses, rv=24315 labels= fields= timeout=6m52s
I0611 09:03:28.328552  109206 get.go:250] Starting watch for /api/v1/persistentvolumes, rv=24315 labels= fields= timeout=6m48s
I0611 09:03:28.328792  109206 get.go:250] Starting watch for /api/v1/replicationcontrollers, rv=24315 labels= fields= timeout=6m30s
I0611 09:03:28.328876  109206 reflector.go:122] Starting reflector *v1.Service (1s) from k8s.io/client-go/informers/factory.go:133
I0611 09:03:28.328891  109206 reflector.go:160] Listing and watching *v1.Service from k8s.io/client-go/informers/factory.go:133
I0611 09:03:28.329652  109206 wrap.go:47] GET /api/v1/services?limit=500&resourceVersion=0: (592.432µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59644]
I0611 09:03:28.330460  109206 get.go:250] Starting watch for /api/v1/services, rv=24549 labels= fields= timeout=5m28s
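(Editor's note: the reflectors above are started by a client-go shared informer factory with the 1s resync period shown in the "(1s)" lines; the "forcing resync" messages later in the log are those reflectors firing once per second. A sketch of that wiring, with clientset and stopCh as placeholder identifiers:

package main

import (
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
)

// startSchedulerInformers starts shared informers with a 1s resync period.
func startSchedulerInformers(clientset kubernetes.Interface, stopCh <-chan struct{}) {
	factory := informers.NewSharedInformerFactory(clientset, time.Second)

	// Requesting an informer registers it with the factory; the scheduler
	// consumes several (Nodes, Services, PVs, PVCs, ReplicaSets, ...).
	factory.Core().V1().Nodes().Informer()
	factory.Core().V1().Services().Informer()

	factory.Start(stopCh)            // one reflector (LIST + WATCH) per informer
	factory.WaitForCacheSync(stopCh) // corresponds to the "caches populated" lines
}
)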
I0611 09:03:28.424507  109206 shared_informer.go:176] caches populated
I0611 09:03:28.524760  109206 shared_informer.go:176] caches populated
I0611 09:03:28.625003  109206 shared_informer.go:176] caches populated
I0611 09:03:28.725310  109206 shared_informer.go:176] caches populated
I0611 09:03:28.825545  109206 shared_informer.go:176] caches populated
I0611 09:03:28.925759  109206 shared_informer.go:176] caches populated
I0611 09:03:29.025965  109206 shared_informer.go:176] caches populated
I0611 09:03:29.126168  109206 shared_informer.go:176] caches populated
I0611 09:03:29.226444  109206 shared_informer.go:176] caches populated
I0611 09:03:29.327253  109206 shared_informer.go:176] caches populated
I0611 09:03:29.327773  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:29.328134  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:29.329370  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:29.330237  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:29.330915  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:29.332649  109206 wrap.go:47] POST /api/v1/nodes: (2.051761ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:29.335214  109206 wrap.go:47] PUT /api/v1/nodes/testnode/status: (2.106118ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
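(Editor's note: the node "testnode" is created and its status immediately updated via the PUT above; in a PID-pressure test this is plausibly where a PIDPressure condition is written onto the node, though the test's own helper may set something different. A hedged sketch using the context-free UpdateStatus signature of client-go from this era, with placeholder identifiers:

package main

import (
	v1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
)

// markPIDPressure appends a PIDPressure=True condition and issues a
// PUT /api/v1/nodes/<name>/status call like the one logged above.
func markPIDPressure(clientset kubernetes.Interface, node *v1.Node) (*v1.Node, error) {
	node.Status.Conditions = append(node.Status.Conditions, v1.NodeCondition{
		Type:   v1.NodePIDPressure,
		Status: v1.ConditionTrue,
	})
	return clientset.CoreV1().Nodes().UpdateStatus(node)
}
)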
I0611 09:03:29.337388  109206 wrap.go:47] POST /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods: (1.722745ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:29.337738  109206 scheduling_queue.go:815] About to try and schedule pod node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pidpressure-fake-name
I0611 09:03:29.337751  109206 scheduler.go:456] Attempting to schedule pod: node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pidpressure-fake-name
I0611 09:03:29.337864  109206 scheduler_binder.go:256] AssumePodVolumes for pod "node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pidpressure-fake-name", node "testnode"
I0611 09:03:29.337878  109206 scheduler_binder.go:266] AssumePodVolumes for pod "node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pidpressure-fake-name", node "testnode": all PVCs bound and nothing to do
I0611 09:03:29.337924  109206 factory.go:727] Attempting to bind pidpressure-fake-name to testnode
I0611 09:03:29.340015  109206 wrap.go:47] POST /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name/binding: (1.841948ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:29.340213  109206 scheduler.go:593] pod node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pidpressure-fake-name is bound successfully on node testnode, 1 nodes evaluated, 1 nodes were found feasible
I0611 09:03:29.342805  109206 wrap.go:47] POST /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/events: (2.2666ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
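(Editor's note: the bind step above is a POST to the pod's binding subresource followed by an event write. Roughly, and only as a sketch with placeholder identifiers, it corresponds to code like the following; client-go's Bind signature has changed across releases (newer versions take a context and options):

package main

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// bindPod issues the POST .../pods/<pod>/binding call that assigns the pod to a node.
func bindPod(clientset kubernetes.Interface, ns, pod, node string) error {
	binding := &v1.Binding{
		ObjectMeta: metav1.ObjectMeta{Namespace: ns, Name: pod},
		Target:     v1.ObjectReference{Kind: "Node", Name: node},
	}
	return clientset.CoreV1().Pods(ns).Bind(binding)
}
)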
I0611 09:03:29.440522  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (2.469388ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:29.539718  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.619994ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:29.639730  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.660469ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:29.753092  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (15.04939ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:29.839922  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.840122ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:29.939616  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.580095ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:30.040062  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.730275ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:30.140029  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.915566ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:30.239979  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (2.010105ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
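(Editor's note: the steady stream of GET .../pods/pidpressure-fake-name lines, spaced roughly 100ms apart, is the test polling the pod. A minimal sketch of such a wait loop, assuming placeholder identifiers and the context-free Get signature of client-go from this era; the condition checked here, pod assigned to a node, is one plausible choice and the test's own helper may instead check the PodScheduled condition:

package main

import (
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPodOnNode polls the pod every 100ms until it has a node assigned
// or the timeout expires.
func waitForPodOnNode(clientset kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.Poll(100*time.Millisecond, timeout, func() (bool, error) {
		pod, err := clientset.CoreV1().Pods(ns).Get(name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		return pod.Spec.NodeName != "", nil
	})
}
)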
I0611 09:03:30.327940  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:30.328279  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:30.329572  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:30.330412  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:30.331036  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:30.339898  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.841034ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:30.439552  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.52416ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:30.539722  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.749173ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:30.639601  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.510669ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:30.739485  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.414379ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:30.839670  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.478043ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:30.939782  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.712573ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:31.039771  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.672818ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:31.139860  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.749452ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:31.239860  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.862697ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:31.328114  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:31.328419  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:31.329985  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:31.330588  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:31.331178  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:31.340112  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (2.001917ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:31.440216  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (2.157534ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:31.541185  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (3.088795ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:31.639869  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.79607ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:31.740925  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (2.423096ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:31.840082  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.903192ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:31.939987  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.903318ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:32.039661  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.667724ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:32.139783  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.67965ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:32.239729  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.687844ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:32.329364  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:32.329479  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:32.330091  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:32.330729  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:32.331276  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:32.339631  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.688731ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:32.439851  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.780503ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:32.542814  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.791098ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:32.644087  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.561714ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:32.739875  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.825317ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:32.840209  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (2.014683ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:32.939962  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.883815ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:33.039885  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.77559ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:33.139675  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.61852ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:33.239774  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.682956ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:33.329582  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:33.329582  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:33.330219  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:33.330914  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:33.331426  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:33.340070  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.931078ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:33.440026  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.919482ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
E0611 09:03:33.531961  109206 event.go:249] Unable to write event: 'Patch http://127.0.0.1:37011/api/v1/namespaces/permit-pluginf8f2d933-db94-4867-b360-482eb7f5772b/events/test-pod.15a719766767ea0b: dial tcp 127.0.0.1:37011: connect: connection refused' (may retry after sleeping)
I0611 09:03:33.540288  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (2.220428ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:33.640002  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.906561ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:33.740051  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.963518ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:33.839833  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.756038ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:33.940216  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (2.048567ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:34.039838  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.734081ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:34.139671  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.626266ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:34.240146  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (2.052602ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:34.329784  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:34.329833  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:34.330396  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:34.331084  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:34.331524  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:34.339784  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.717549ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:34.439764  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.721382ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:34.539775  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.735705ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:34.639884  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.807709ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:34.739770  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.677746ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:34.840027  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.918839ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:34.939836  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.751492ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:35.039910  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.827054ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:35.139784  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.738029ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:35.239856  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.810926ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:35.329963  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:35.329963  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:35.330556  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:35.331247  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:35.331668  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:35.340084  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.701477ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:35.439548  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.471797ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:35.539880  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.816076ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:35.639608  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.61831ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:35.739972  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.960164ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:35.839892  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.866558ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:35.939989  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.860055ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:36.039708  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.64661ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:36.139716  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.702878ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:36.239868  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.725246ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:36.330205  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:36.330242  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:36.330684  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:36.331390  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:36.331776  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:36.339927  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.823382ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:36.440018  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.92187ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:36.539952  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.783304ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:36.639873  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.356911ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:36.739586  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.535581ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:36.839835  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.794051ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:36.939712  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.667431ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:37.039645  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.621027ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:37.139810  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.760085ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:37.240268  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.794469ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:37.330408  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:37.330459  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:37.330874  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:37.331546  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:37.331902  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:37.339962  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.465575ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:37.446003  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (7.929406ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:37.565843  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (27.800702ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:37.640407  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (2.33809ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:37.739919  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.85438ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:37.839752  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.678163ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:37.940067  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (2.000848ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:38.040325  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (2.273588ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:38.137475  109206 wrap.go:47] GET /api/v1/namespaces/default: (1.406486ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:38.139529  109206 wrap.go:47] GET /api/v1/namespaces/default/services/kubernetes: (1.706046ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:38.139704  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.055406ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60844]
I0611 09:03:38.140997  109206 wrap.go:47] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.16443ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:38.239789  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.82237ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:38.330594  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:38.330707  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:38.331010  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:38.331700  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:38.332044  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:38.339521  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.539102ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:38.439499  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.543813ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:38.539740  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.706393ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:38.639437  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.482372ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:38.739614  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.579353ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:38.839520  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.387234ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:38.940048  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (2.060771ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:39.039494  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.51746ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:39.139377  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.336992ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:39.239707  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.649326ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:39.332428  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:39.332575  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:39.333425  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:39.333503  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:39.333523  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:39.339652  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.662059ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:39.442228  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (4.186351ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:39.539722  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.689289ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:39.642307  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (2.385804ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:39.739665  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.496687ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:39.843382  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (2.449114ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:39.939835  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.741685ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:40.039711  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.56642ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:40.139641  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.545912ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:40.240985  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.839461ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:40.332651  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:40.332791  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:40.333616  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:40.333636  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:40.333722  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:40.339698  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.726512ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:40.440182  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (2.1923ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:40.539483  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.429725ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:40.639580  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.541734ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
E0611 09:03:40.656992  109206 factory.go:711] Error getting pod permit-pluginf8f2d933-db94-4867-b360-482eb7f5772b/test-pod for retry: Get http://127.0.0.1:37011/api/v1/namespaces/permit-pluginf8f2d933-db94-4867-b360-482eb7f5772b/pods/test-pod: dial tcp 127.0.0.1:37011: connect: connection refused; retrying...
I0611 09:03:40.739586  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.542816ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:40.839554  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.444045ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:40.939648  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.588482ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:41.039911  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.813068ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:41.139967  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.88773ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:41.239386  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.400374ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:41.332882  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:41.332882  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:41.333783  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:41.333845  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:41.333858  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:41.339540  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.554719ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:41.440951  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (2.891847ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:41.540270  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (2.024485ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:41.639969  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.630062ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:41.740538  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (2.335674ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:41.839825  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.79228ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:41.939622  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.617817ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:42.040807  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.557297ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:42.139958  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.859253ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:42.239878  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.811001ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:42.333097  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:42.333106  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:42.333958  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:42.334015  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:42.334072  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:42.340038  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.969849ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:42.439937  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.888628ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:42.540228  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (2.218354ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:42.639578  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.554438ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:42.739731  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.684172ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:42.839945  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.874602ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:42.939677  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.612266ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:43.040017  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.918356ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:43.140046  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.992041ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:43.239806  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.776732ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:43.333295  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:43.333369  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:43.334111  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:43.334159  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:43.334213  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:43.339770  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.778346ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:43.439790  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.634086ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
E0611 09:03:43.532608  109206 event.go:249] Unable to write event: 'Patch http://127.0.0.1:37011/api/v1/namespaces/permit-pluginf8f2d933-db94-4867-b360-482eb7f5772b/events/test-pod.15a719766767ea0b: dial tcp 127.0.0.1:37011: connect: connection refused' (may retry after sleeping)
I0611 09:03:43.539714  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.644073ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:43.639682  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.702979ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:43.739905  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.810107ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:43.839852  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.760327ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:43.939626  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.602063ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:44.039812  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.670797ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:44.139709  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.6541ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:44.239810  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.748122ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:44.333505  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:44.333507  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:44.334241  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:44.334307  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:44.334331  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:44.339645  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.651936ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:44.440019  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.973066ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:44.539826  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.734915ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:44.639692  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.658762ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:44.739564  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.564631ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:44.840110  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (2.052902ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:44.939786  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.728731ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:45.039714  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.636268ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:45.139838  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.768894ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:45.239227  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.234794ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:45.334076  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:45.334083  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:45.334725  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:45.335398  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:45.335412  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:45.346252  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (8.139418ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:45.440555  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (2.225713ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:45.539762  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.722892ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:45.639492  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.462204ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:45.739796  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.755831ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:45.839976  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.877725ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:45.940196  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (2.117243ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:46.040019  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.96386ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:46.140113  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (2.057527ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:46.239639  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.687465ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:46.334243  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:46.334394  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:46.334887  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:46.335547  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:46.335683  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:46.339811  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.59986ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:46.439821  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.711554ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:46.539762  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.579458ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:46.639683  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.543675ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:46.739768  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.677128ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:46.839646  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.624237ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:46.949631  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (9.828898ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:47.039964  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.859953ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:47.140065  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (2.000347ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:47.239807  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.772225ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:47.315133  109206 factory.go:727] Attempting to bind signalling-pod to test-node-0
I0611 09:03:47.315611  109206 scheduler.go:424] Failed to bind pod: permit-plugin9d4eba57-1c3d-4a8f-aa0b-109ee523ad52/signalling-pod
E0611 09:03:47.315639  109206 scheduler.go:426] scheduler cache ForgetPod failed: pod cf6d3312-77ba-4038-92cb-8be627bcd5e5 wasn't assumed so cannot be forgotten
E0611 09:03:47.315663  109206 factory.go:678] Error scheduling permit-plugin9d4eba57-1c3d-4a8f-aa0b-109ee523ad52/signalling-pod: Post http://127.0.0.1:44963/api/v1/namespaces/permit-plugin9d4eba57-1c3d-4a8f-aa0b-109ee523ad52/pods/signalling-pod/binding: dial tcp 127.0.0.1:44963: connect: connection refused; retrying
I0611 09:03:47.315724  109206 factory.go:736] Updating pod condition for permit-plugin9d4eba57-1c3d-4a8f-aa0b-109ee523ad52/signalling-pod to (PodScheduled==False, Reason=SchedulerError)
E0611 09:03:47.316105  109206 factory.go:711] Error getting pod permit-plugin9d4eba57-1c3d-4a8f-aa0b-109ee523ad52/signalling-pod for retry: Get http://127.0.0.1:44963/api/v1/namespaces/permit-plugin9d4eba57-1c3d-4a8f-aa0b-109ee523ad52/pods/signalling-pod: dial tcp 127.0.0.1:44963: connect: connection refused; retrying...
E0611 09:03:47.316131  109206 scheduler.go:588] error binding pod: Post http://127.0.0.1:44963/api/v1/namespaces/permit-plugin9d4eba57-1c3d-4a8f-aa0b-109ee523ad52/pods/signalling-pod/binding: dial tcp 127.0.0.1:44963: connect: connection refused
E0611 09:03:47.316187  109206 event.go:249] Unable to write event: 'Post http://127.0.0.1:44963/api/v1/namespaces/permit-plugin9d4eba57-1c3d-4a8f-aa0b-109ee523ad52/events: dial tcp 127.0.0.1:44963: connect: connection refused' (may retry after sleeping)
I0611 09:03:47.334436  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:47.334524  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:47.335052  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:47.335687  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:47.335773  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:47.339733  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.736752ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:47.439775  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.715937ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
E0611 09:03:47.516753  109206 factory.go:711] Error getting pod permit-plugin9d4eba57-1c3d-4a8f-aa0b-109ee523ad52/signalling-pod for retry: Get http://127.0.0.1:44963/api/v1/namespaces/permit-plugin9d4eba57-1c3d-4a8f-aa0b-109ee523ad52/pods/signalling-pod: dial tcp 127.0.0.1:44963: connect: connection refused; retrying...
I0611 09:03:47.539829  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.731015ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:47.640005  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.909458ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:47.739991  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.896333ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:47.841433  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (3.330911ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
E0611 09:03:47.917338  109206 factory.go:711] Error getting pod permit-plugin9d4eba57-1c3d-4a8f-aa0b-109ee523ad52/signalling-pod for retry: Get http://127.0.0.1:44963/api/v1/namespaces/permit-plugin9d4eba57-1c3d-4a8f-aa0b-109ee523ad52/pods/signalling-pod: dial tcp 127.0.0.1:44963: connect: connection refused; retrying...
I0611 09:03:47.939822  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.853431ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:48.039976  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.819256ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:48.138135  109206 wrap.go:47] GET /api/v1/namespaces/default: (1.702403ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:48.139421  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.419782ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60844]
I0611 09:03:48.140470  109206 wrap.go:47] GET /api/v1/namespaces/default/services/kubernetes: (1.401516ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:48.141850  109206 wrap.go:47] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.040448ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:48.240154  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (2.128586ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:48.334680  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:48.334961  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:48.335191  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:48.335922  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:48.335977  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:48.339975  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.865529ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:48.439864  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.810602ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:48.539961  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.871163ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:48.639783  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.670309ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
E0611 09:03:48.717986  109206 factory.go:711] Error getting pod permit-plugin9d4eba57-1c3d-4a8f-aa0b-109ee523ad52/signalling-pod for retry: Get http://127.0.0.1:44963/api/v1/namespaces/permit-plugin9d4eba57-1c3d-4a8f-aa0b-109ee523ad52/pods/signalling-pod: dial tcp 127.0.0.1:44963: connect: connection refused; retrying...
I0611 09:03:48.740068  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.996103ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
E0611 09:03:48.746139  109206 event.go:249] Unable to write event: 'Post http://127.0.0.1:44963/api/v1/namespaces/permit-plugin9d4eba57-1c3d-4a8f-aa0b-109ee523ad52/events: dial tcp 127.0.0.1:44963: connect: connection refused' (may retry after sleeping)
I0611 09:03:48.839873  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.755818ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:48.941775  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (3.694731ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:49.041117  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (2.477798ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:49.140000  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.876874ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:49.240631  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.883521ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:49.335538  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:49.335704  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:49.335821  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:49.336049  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:49.336111  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:49.339940  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.919786ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:49.440409  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (2.032084ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:49.539968  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.868198ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:49.639898  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.827607ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:49.740032  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.933154ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:49.839735  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.637813ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:49.941394  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (2.467622ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:50.039987  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.811239ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:50.139920  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.801253ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:50.240032  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.882555ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
E0611 09:03:50.318572  109206 factory.go:711] Error getting pod permit-plugin9d4eba57-1c3d-4a8f-aa0b-109ee523ad52/signalling-pod for retry: Get http://127.0.0.1:44963/api/v1/namespaces/permit-plugin9d4eba57-1c3d-4a8f-aa0b-109ee523ad52/pods/signalling-pod: dial tcp 127.0.0.1:44963: connect: connection refused; retrying...
I0611 09:03:50.335657  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:50.335828  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:50.335971  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:50.336213  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:50.336215  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:50.340180  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (2.169915ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:50.439973  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.902328ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:50.540046  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.93338ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:50.639771  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.730709ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:50.740096  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.94522ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:50.839806  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.685085ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:50.940703  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (2.602525ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:51.040080  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.898532ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:51.139950  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.858516ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:51.241711  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (2.176172ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:51.335939  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:51.335996  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:51.336088  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:51.336400  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:51.340675  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.799395ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:51.342319  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:51.444575  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.946739ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:51.539970  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.838661ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:51.640120  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (2.016157ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:51.740099  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (2.041059ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:51.839872  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.783668ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:51.940098  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (2.012486ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:52.042524  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (4.424021ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:52.140022  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.896365ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:52.242867  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.801206ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:52.336229  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:52.336275  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:52.336288  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:52.336540  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:52.342714  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.840286ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:52.343140  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:52.440018  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.807848ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:52.539535  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.486131ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:52.640415  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (2.060558ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:52.741478  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.801618ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:52.839841  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.787364ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:52.940017  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.913863ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:53.039935  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.846784ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:53.139936  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.818031ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:53.240085  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.993426ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:53.336477  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:53.336536  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:53.336551  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:53.336736  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:53.339963  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.943496ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:53.343297  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:53.440002  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.904582ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
E0611 09:03:53.519196  109206 factory.go:711] Error getting pod permit-plugin9d4eba57-1c3d-4a8f-aa0b-109ee523ad52/signalling-pod for retry: Get http://127.0.0.1:44963/api/v1/namespaces/permit-plugin9d4eba57-1c3d-4a8f-aa0b-109ee523ad52/pods/signalling-pod: dial tcp 127.0.0.1:44963: connect: connection refused; retrying...
E0611 09:03:53.533358  109206 event.go:249] Unable to write event: 'Patch http://127.0.0.1:37011/api/v1/namespaces/permit-pluginf8f2d933-db94-4867-b360-482eb7f5772b/events/test-pod.15a719766767ea0b: dial tcp 127.0.0.1:37011: connect: connection refused' (may retry after sleeping)
I0611 09:03:53.541367  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (3.23754ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:53.639796  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.762469ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:53.739861  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.817014ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:53.840231  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.684274ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:53.940435  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (2.277631ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:54.039925  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.816545ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:54.140675  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.763121ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:54.241508  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (3.432948ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:54.336683  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:54.336731  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:54.336745  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:54.336884  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:54.340027  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.540691ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:54.343443  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:54.439861  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.799815ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:54.540048  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.872603ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:54.639950  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.936897ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:54.739754  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.629831ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:54.839680  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.665096ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:54.939943  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.816684ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:55.039836  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.724393ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:55.139634  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.563883ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:55.239761  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.538142ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:55.336810  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:55.336893  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:55.336954  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:55.337080  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:55.340930  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (2.863971ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:55.343602  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:55.439551  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.522477ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:55.540728  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (2.520708ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:55.639639  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.571349ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:55.739625  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.571879ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:55.839898  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.804681ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:55.940064  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.98939ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:56.039827  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.74899ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:56.139966  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.695071ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:56.241123  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (2.838374ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:56.337025  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:56.337099  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:56.337115  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:56.337206  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:56.339891  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.863069ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:56.343804  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:56.440071  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.763209ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:56.539927  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.787291ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:56.640175  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (2.117728ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:56.740652  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.349332ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:56.839742  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.649283ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:56.939807  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.73064ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:57.044791  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (6.665516ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:57.139509  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.445555ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:57.243336  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.838562ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:57.337304  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:57.337312  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:57.337362  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:57.337367  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:57.339596  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.583557ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:57.343968  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:57.439741  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.694409ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:57.539736  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.68196ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:57.639713  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.664892ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:57.739728  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.715146ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:57.839601  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.544454ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:57.939872  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.864767ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:58.039890  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.570567ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:58.138779  109206 wrap.go:47] GET /api/v1/namespaces/default: (1.620564ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:58.139612  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.432737ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60844]
I0611 09:03:58.140692  109206 wrap.go:47] GET /api/v1/namespaces/default/services/kubernetes: (1.210802ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:58.142666  109206 wrap.go:47] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.57509ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:58.239829  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.799887ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:58.337490  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:58.337545  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:58.337566  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:58.337566  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:58.339895  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.794127ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:58.344523  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:58.440101  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.988751ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:58.539897  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.803484ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:58.640056  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.906273ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:58.740254  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (2.158004ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
E0611 09:03:58.746798  109206 event.go:249] Unable to write event: 'Post http://127.0.0.1:44963/api/v1/namespaces/permit-plugin9d4eba57-1c3d-4a8f-aa0b-109ee523ad52/events: dial tcp 127.0.0.1:44963: connect: connection refused' (may retry after sleeping)
I0611 09:03:58.840257  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (2.136351ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:58.944178  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (2.151397ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:59.042570  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.717066ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:59.139618  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.544278ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:59.239803  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.718281ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:59.337690  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:59.337775  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:59.337800  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:59.337874  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:59.339382  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.382516ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:59.341170  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.340591ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:59.345134  109206 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0611 09:03:59.348072  109206 wrap.go:47] DELETE /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (6.346292ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:59.350668  109206 wrap.go:47] GET /api/v1/namespaces/node-pid-pressure58fbe338-ded0-49f9-83c7-d3681f787565/pods/pidpressure-fake-name: (1.003026ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:59.351415  109206 wrap.go:47] GET /api/v1/persistentvolumes?resourceVersion=24315&timeout=6m48s&timeoutSeconds=408&watch=true: (31.023535963s) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59642]
E0611 09:03:59.351476  109206 scheduling_queue.go:818] Error while retrieving next pod from scheduling queue: scheduling queue is closed
I0611 09:03:59.351582  109206 wrap.go:47] GET /api/v1/persistentvolumeclaims?resourceVersion=24315&timeout=9m33s&timeoutSeconds=573&watch=true: (31.02329745s) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59638]
I0611 09:03:59.351695  109206 wrap.go:47] GET /api/v1/replicationcontrollers?resourceVersion=24315&timeout=6m30s&timeoutSeconds=390&watch=true: (31.023412213s) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59636]
I0611 09:03:59.351788  109206 wrap.go:47] GET /api/v1/nodes?resourceVersion=24315&timeout=9m51s&timeoutSeconds=591&watch=true: (31.024558079s) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59258]
I0611 09:03:59.351790  109206 wrap.go:47] GET /api/v1/services?resourceVersion=24549&timeout=5m28s&timeoutSeconds=328&watch=true: (31.021584199s) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59644]
I0611 09:03:59.351881  109206 wrap.go:47] GET /api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&resourceVersion=24315&timeoutSeconds=524&watch=true: (31.125909715s) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59266]
I0611 09:03:59.351997  109206 wrap.go:47] GET /apis/policy/v1beta1/poddisruptionbudgets?resourceVersion=24315&timeout=8m2s&timeoutSeconds=482&watch=true: (31.024384239s) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59634]
I0611 09:03:59.352127  109206 wrap.go:47] GET /apis/apps/v1/replicasets?resourceVersion=24315&timeout=8m28s&timeoutSeconds=508&watch=true: (31.024553712s) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59632]
I0611 09:03:59.352217  109206 wrap.go:47] GET /apis/apps/v1/statefulsets?resourceVersion=24315&timeout=6m34s&timeoutSeconds=394&watch=true: (31.024301976s) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59628]
I0611 09:03:59.352318  109206 wrap.go:47] GET /apis/storage.k8s.io/v1/storageclasses?resourceVersion=24315&timeout=6m52s&timeoutSeconds=412&watch=true: (31.023997549s) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59630]
I0611 09:03:59.359270  109206 wrap.go:47] DELETE /api/v1/nodes: (7.30448ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:59.359555  109206 controller.go:176] Shutting down kubernetes service endpoint reconciler
I0611 09:03:59.360965  109206 wrap.go:47] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.163315ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
I0611 09:03:59.363200  109206 wrap.go:47] PUT /api/v1/namespaces/default/endpoints/kubernetes: (1.870385ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59646]
predicates_test.go:918: Test Failed: error, timed out waiting for the condition, while waiting for scheduled
				from junit_d431ed5f68ae4ddf888439fb96b687a923412204_20190611-085714.xml
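The failure above means the test gave up while polling for the pod to be assigned a node, which matches the repeated GETs of pidpressure-fake-name in the log every ~100ms. A minimal sketch of such a wait loop, assuming a pre-0.18 (context-less) client-go Get and a hypothetical shortened namespace name, not the test's actual helper:

```go
package main

import (
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPodScheduled polls until the pod has been bound to a node or the
// timeout expires; on timeout wait.Poll returns wait.ErrWaitTimeout, whose
// message is "timed out waiting for the condition".
func waitForPodScheduled(cs kubernetes.Interface, namespace, name string) error {
	return wait.Poll(100*time.Millisecond, 30*time.Second, func() (bool, error) {
		// Pre-0.18 client-go signature (no context argument), matching this repo's vintage.
		pod, err := cs.CoreV1().Pods(namespace).Get(name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		// A pod counts as scheduled once the scheduler has set spec.nodeName.
		return pod.Spec.NodeName != "", nil
	})
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(config)
	// "node-pid-pressure-test" is a hypothetical namespace for illustration;
	// the pod name is the one polled in the log above.
	if err := waitForPodScheduled(cs, "node-pid-pressure-test", "pidpressure-fake-name"); err != nil {
		fmt.Println("pod never scheduled:", err)
	}
}
```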

