PR damemi: Move PodPriorityResolution e2e to integration
Result FAILURE
Tests 2 failed / 2836 succeeded
Started 2019-08-27 13:12
Elapsed 26m44s
Revision
Builder gke-prow-ssd-pool-1a225945-m2ml
Refs master:bcb464db
80824:4bea0c4f
pod 2d63fb02-c8cc-11e9-9b30-3a38863a8680
infra-commit 78cf77118
repo k8s.io/kubernetes
repo-commit 7f441f569fcb6eff0f92ac1036357471bd69125a
repos {u'k8s.io/kubernetes': u'master:bcb464db7ba2478429f4deff2c682f4efe46855e,80824:4bea0c4f24c61bca709fd345d6363f93b767aac7'}

Test Failures


k8s.io/kubernetes/test/integration/scheduler TestPodPriorityResolution 4.95s

go test -v k8s.io/kubernetes/test/integration/scheduler -run TestPodPriorityResolution$
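The trailing $ anchors the -run pattern so only TestPodPriorityResolution is selected, not tests that merely share the prefix. As the storagebackend.Config lines in the log below show, the integration apiserver expects an etcd instance at http://127.0.0.1:2379; a local reproduction might look like the following sketch, assuming a kubernetes repository checkout and an etcd binary on PATH (both assumptions, not part of this job's output):

    # start a local etcd for the integration apiserver (flags shown are standard etcd flags)
    etcd --listen-client-urls http://127.0.0.1:2379 --advertise-client-urls http://127.0.0.1:2379 &
    # run only the failing test, verbosely
    go test -v k8s.io/kubernetes/test/integration/scheduler -run TestPodPriorityResolution$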
=== RUN   TestPodPriorityResolution
I0827 13:35:29.953037  108530 services.go:33] Network range for service cluster IPs is unspecified. Defaulting to {10.0.0.0 ffffff00}.
I0827 13:35:29.953069  108530 services.go:45] Setting service IP to "10.0.0.1" (read-write).
I0827 13:35:29.953081  108530 master.go:278] Node port range unspecified. Defaulting to 30000-32767.
I0827 13:35:29.953093  108530 master.go:234] Using reconciler: 
I0827 13:35:29.955078  108530 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:29.955314  108530 client.go:361] parsed scheme: "endpoint"
I0827 13:35:29.955401  108530 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 13:35:29.955547  108530 store.go:1342] Monitoring podtemplates count at <storage-prefix>//podtemplates
I0827 13:35:29.955690  108530 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:29.956083  108530 reflector.go:158] Listing and watching *core.PodTemplate from storage/cacher.go:/podtemplates
I0827 13:35:29.956323  108530 client.go:361] parsed scheme: "endpoint"
I0827 13:35:29.956436  108530 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 13:35:29.957322  108530 store.go:1342] Monitoring events count at <storage-prefix>//events
I0827 13:35:29.957583  108530 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:29.957930  108530 watch_cache.go:405] Replace watchCache (rev: 33785) 
I0827 13:35:29.958133  108530 reflector.go:158] Listing and watching *core.Event from storage/cacher.go:/events
I0827 13:35:29.957944  108530 client.go:361] parsed scheme: "endpoint"
I0827 13:35:29.958444  108530 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 13:35:29.958666  108530 store.go:1342] Monitoring limitranges count at <storage-prefix>//limitranges
I0827 13:35:29.959153  108530 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:29.959629  108530 client.go:361] parsed scheme: "endpoint"
I0827 13:35:29.959937  108530 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 13:35:29.960281  108530 store.go:1342] Monitoring resourcequotas count at <storage-prefix>//resourcequotas
I0827 13:35:29.958975  108530 reflector.go:158] Listing and watching *core.LimitRange from storage/cacher.go:/limitranges
I0827 13:35:29.958980  108530 watch_cache.go:405] Replace watchCache (rev: 33786) 
I0827 13:35:29.960594  108530 reflector.go:158] Listing and watching *core.ResourceQuota from storage/cacher.go:/resourcequotas
I0827 13:35:29.961120  108530 watch_cache.go:405] Replace watchCache (rev: 33786) 
I0827 13:35:29.962230  108530 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:29.962417  108530 client.go:361] parsed scheme: "endpoint"
I0827 13:35:29.962446  108530 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 13:35:29.963015  108530 store.go:1342] Monitoring secrets count at <storage-prefix>//secrets
I0827 13:35:29.963288  108530 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:29.963945  108530 client.go:361] parsed scheme: "endpoint"
I0827 13:35:29.964092  108530 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 13:35:29.963087  108530 reflector.go:158] Listing and watching *core.Secret from storage/cacher.go:/secrets
I0827 13:35:29.963666  108530 watch_cache.go:405] Replace watchCache (rev: 33786) 
I0827 13:35:29.964605  108530 store.go:1342] Monitoring persistentvolumes count at <storage-prefix>//persistentvolumes
I0827 13:35:29.964735  108530 reflector.go:158] Listing and watching *core.PersistentVolume from storage/cacher.go:/persistentvolumes
I0827 13:35:29.964948  108530 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:29.965220  108530 watch_cache.go:405] Replace watchCache (rev: 33787) 
I0827 13:35:29.965399  108530 client.go:361] parsed scheme: "endpoint"
I0827 13:35:29.965457  108530 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 13:35:29.965616  108530 store.go:1342] Monitoring persistentvolumeclaims count at <storage-prefix>//persistentvolumeclaims
I0827 13:35:29.965716  108530 reflector.go:158] Listing and watching *core.PersistentVolumeClaim from storage/cacher.go:/persistentvolumeclaims
I0827 13:35:29.966216  108530 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:29.966396  108530 client.go:361] parsed scheme: "endpoint"
I0827 13:35:29.966416  108530 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 13:35:29.966456  108530 watch_cache.go:405] Replace watchCache (rev: 33787) 
I0827 13:35:29.968176  108530 watch_cache.go:405] Replace watchCache (rev: 33787) 
I0827 13:35:29.968792  108530 store.go:1342] Monitoring configmaps count at <storage-prefix>//configmaps
I0827 13:35:29.968920  108530 reflector.go:158] Listing and watching *core.ConfigMap from storage/cacher.go:/configmaps
I0827 13:35:29.969372  108530 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:29.969942  108530 client.go:361] parsed scheme: "endpoint"
I0827 13:35:29.970105  108530 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 13:35:29.969681  108530 watch_cache.go:405] Replace watchCache (rev: 33788) 
I0827 13:35:29.971084  108530 reflector.go:158] Listing and watching *core.Namespace from storage/cacher.go:/namespaces
I0827 13:35:29.971714  108530 store.go:1342] Monitoring namespaces count at <storage-prefix>//namespaces
I0827 13:35:29.972165  108530 watch_cache.go:405] Replace watchCache (rev: 33788) 
I0827 13:35:29.973602  108530 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:29.973832  108530 client.go:361] parsed scheme: "endpoint"
I0827 13:35:29.973953  108530 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 13:35:29.974141  108530 store.go:1342] Monitoring endpoints count at <storage-prefix>//services/endpoints
I0827 13:35:29.974420  108530 reflector.go:158] Listing and watching *core.Endpoints from storage/cacher.go:/services/endpoints
I0827 13:35:29.975130  108530 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:29.976018  108530 watch_cache.go:405] Replace watchCache (rev: 33790) 
I0827 13:35:29.976205  108530 client.go:361] parsed scheme: "endpoint"
I0827 13:35:29.976359  108530 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 13:35:29.976976  108530 store.go:1342] Monitoring nodes count at <storage-prefix>//minions
I0827 13:35:29.977046  108530 reflector.go:158] Listing and watching *core.Node from storage/cacher.go:/minions
I0827 13:35:29.977978  108530 watch_cache.go:405] Replace watchCache (rev: 33790) 
I0827 13:35:29.978738  108530 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:29.979002  108530 client.go:361] parsed scheme: "endpoint"
I0827 13:35:29.979107  108530 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 13:35:29.979607  108530 store.go:1342] Monitoring pods count at <storage-prefix>//pods
I0827 13:35:29.979717  108530 reflector.go:158] Listing and watching *core.Pod from storage/cacher.go:/pods
I0827 13:35:29.980036  108530 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:29.980204  108530 client.go:361] parsed scheme: "endpoint"
I0827 13:35:29.980362  108530 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 13:35:29.980586  108530 store.go:1342] Monitoring serviceaccounts count at <storage-prefix>//serviceaccounts
I0827 13:35:29.980671  108530 reflector.go:158] Listing and watching *core.ServiceAccount from storage/cacher.go:/serviceaccounts
I0827 13:35:29.980885  108530 watch_cache.go:405] Replace watchCache (rev: 33791) 
I0827 13:35:29.980832  108530 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:29.981113  108530 client.go:361] parsed scheme: "endpoint"
I0827 13:35:29.981269  108530 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 13:35:29.982292  108530 watch_cache.go:405] Replace watchCache (rev: 33791) 
I0827 13:35:29.983593  108530 store.go:1342] Monitoring services count at <storage-prefix>//services/specs
I0827 13:35:29.983648  108530 reflector.go:158] Listing and watching *core.Service from storage/cacher.go:/services/specs
I0827 13:35:29.983636  108530 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:29.984416  108530 watch_cache.go:405] Replace watchCache (rev: 33791) 
I0827 13:35:29.985556  108530 client.go:361] parsed scheme: "endpoint"
I0827 13:35:29.985588  108530 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 13:35:29.986867  108530 client.go:361] parsed scheme: "endpoint"
I0827 13:35:29.986896  108530 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 13:35:29.987089  108530 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:29.987314  108530 client.go:361] parsed scheme: "endpoint"
I0827 13:35:29.987385  108530 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 13:35:29.987526  108530 store.go:1342] Monitoring replicationcontrollers count at <storage-prefix>//controllers
I0827 13:35:29.987776  108530 reflector.go:158] Listing and watching *core.ReplicationController from storage/cacher.go:/controllers
I0827 13:35:29.987943  108530 storage_factory.go:285] storing bindings in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:29.988080  108530 storage_factory.go:285] storing componentstatuses in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:29.988963  108530 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:29.989588  108530 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:29.989799  108530 watch_cache.go:405] Replace watchCache (rev: 33792) 
I0827 13:35:29.990418  108530 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:29.990967  108530 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:29.991348  108530 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:29.991547  108530 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:29.991764  108530 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:29.992168  108530 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:29.992678  108530 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:29.992836  108530 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:29.993380  108530 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:29.993580  108530 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:29.994057  108530 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:29.994247  108530 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:29.994726  108530 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:29.994934  108530 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:29.995045  108530 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:29.995127  108530 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:29.995248  108530 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:29.995329  108530 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:29.995444  108530 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:29.996118  108530 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:29.996300  108530 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:29.996931  108530 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:29.997505  108530 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:29.997765  108530 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:29.998031  108530 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:29.998657  108530 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:29.998910  108530 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:29.999585  108530 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:30.000191  108530 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:30.000709  108530 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:30.001250  108530 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:30.001428  108530 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:30.001530  108530 master.go:423] Skipping disabled API group "auditregistration.k8s.io".
I0827 13:35:30.001549  108530 master.go:434] Enabling API group "authentication.k8s.io".
I0827 13:35:30.001559  108530 master.go:434] Enabling API group "authorization.k8s.io".
I0827 13:35:30.001741  108530 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:30.001935  108530 client.go:361] parsed scheme: "endpoint"
I0827 13:35:30.001964  108530 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 13:35:30.002113  108530 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0827 13:35:30.002201  108530 reflector.go:158] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0827 13:35:30.002239  108530 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:30.002336  108530 client.go:361] parsed scheme: "endpoint"
I0827 13:35:30.002355  108530 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 13:35:30.002425  108530 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0827 13:35:30.002521  108530 reflector.go:158] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0827 13:35:30.002535  108530 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:30.002617  108530 client.go:361] parsed scheme: "endpoint"
I0827 13:35:30.002635  108530 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 13:35:30.002740  108530 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0827 13:35:30.002757  108530 master.go:434] Enabling API group "autoscaling".
I0827 13:35:30.002794  108530 reflector.go:158] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0827 13:35:30.002938  108530 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:30.003085  108530 client.go:361] parsed scheme: "endpoint"
I0827 13:35:30.003102  108530 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 13:35:30.003221  108530 store.go:1342] Monitoring jobs.batch count at <storage-prefix>//jobs
I0827 13:35:30.003321  108530 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:30.003390  108530 client.go:361] parsed scheme: "endpoint"
I0827 13:35:30.003405  108530 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 13:35:30.003488  108530 store.go:1342] Monitoring cronjobs.batch count at <storage-prefix>//cronjobs
I0827 13:35:30.003504  108530 master.go:434] Enabling API group "batch".
I0827 13:35:30.003586  108530 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:30.003667  108530 client.go:361] parsed scheme: "endpoint"
I0827 13:35:30.003682  108530 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 13:35:30.003944  108530 reflector.go:158] Listing and watching *batch.CronJob from storage/cacher.go:/cronjobs
I0827 13:35:30.004074  108530 store.go:1342] Monitoring certificatesigningrequests.certificates.k8s.io count at <storage-prefix>//certificatesigningrequests
I0827 13:35:30.004094  108530 master.go:434] Enabling API group "certificates.k8s.io".
I0827 13:35:30.004246  108530 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:30.004260  108530 reflector.go:158] Listing and watching *certificates.CertificateSigningRequest from storage/cacher.go:/certificatesigningrequests
I0827 13:35:30.004327  108530 client.go:361] parsed scheme: "endpoint"
I0827 13:35:30.004338  108530 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 13:35:30.004408  108530 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0827 13:35:30.004491  108530 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:30.004551  108530 client.go:361] parsed scheme: "endpoint"
I0827 13:35:30.004562  108530 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 13:35:30.004630  108530 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0827 13:35:30.004647  108530 master.go:434] Enabling API group "coordination.k8s.io".
I0827 13:35:30.004654  108530 reflector.go:158] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0827 13:35:30.005174  108530 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:30.005501  108530 client.go:361] parsed scheme: "endpoint"
I0827 13:35:30.005523  108530 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 13:35:30.005611  108530 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0827 13:35:30.005638  108530 master.go:434] Enabling API group "extensions".
I0827 13:35:30.005645  108530 watch_cache.go:405] Replace watchCache (rev: 33796) 
I0827 13:35:30.005752  108530 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:30.005967  108530 client.go:361] parsed scheme: "endpoint"
I0827 13:35:30.005989  108530 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 13:35:30.006071  108530 store.go:1342] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I0827 13:35:30.006181  108530 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:30.006258  108530 client.go:361] parsed scheme: "endpoint"
I0827 13:35:30.006275  108530 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 13:35:30.006389  108530 watch_cache.go:405] Replace watchCache (rev: 33796) 
I0827 13:35:30.006413  108530 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0827 13:35:30.006426  108530 master.go:434] Enabling API group "networking.k8s.io".
I0827 13:35:30.006449  108530 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:30.006508  108530 client.go:361] parsed scheme: "endpoint"
I0827 13:35:30.006528  108530 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 13:35:30.006614  108530 store.go:1342] Monitoring runtimeclasses.node.k8s.io count at <storage-prefix>//runtimeclasses
I0827 13:35:30.006635  108530 master.go:434] Enabling API group "node.k8s.io".
I0827 13:35:30.006660  108530 reflector.go:158] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0827 13:35:30.006752  108530 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:30.006988  108530 reflector.go:158] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0827 13:35:30.007368  108530 client.go:361] parsed scheme: "endpoint"
I0827 13:35:30.007391  108530 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 13:35:30.007460  108530 store.go:1342] Monitoring poddisruptionbudgets.policy count at <storage-prefix>//poddisruptionbudgets
I0827 13:35:30.007478  108530 reflector.go:158] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I0827 13:35:30.007565  108530 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:30.007631  108530 client.go:361] parsed scheme: "endpoint"
I0827 13:35:30.007644  108530 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 13:35:30.007709  108530 store.go:1342] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicy
I0827 13:35:30.007719  108530 master.go:434] Enabling API group "policy".
I0827 13:35:30.007739  108530 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:30.007820  108530 client.go:361] parsed scheme: "endpoint"
I0827 13:35:30.007829  108530 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 13:35:30.007926  108530 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0827 13:35:30.007993  108530 reflector.go:158] Listing and watching *node.RuntimeClass from storage/cacher.go:/runtimeclasses
I0827 13:35:30.008025  108530 reflector.go:158] Listing and watching *policy.PodDisruptionBudget from storage/cacher.go:/poddisruptionbudgets
I0827 13:35:30.008084  108530 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:30.007630  108530 reflector.go:158] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0827 13:35:30.008172  108530 client.go:361] parsed scheme: "endpoint"
I0827 13:35:30.008190  108530 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 13:35:30.008225  108530 reflector.go:158] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicy
I0827 13:35:30.008248  108530 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0827 13:35:30.008266  108530 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:30.008339  108530 client.go:361] parsed scheme: "endpoint"
I0827 13:35:30.008355  108530 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 13:35:30.008384  108530 reflector.go:158] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0827 13:35:30.008395  108530 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0827 13:35:30.008474  108530 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:30.008515  108530 reflector.go:158] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0827 13:35:30.008546  108530 client.go:361] parsed scheme: "endpoint"
I0827 13:35:30.008555  108530 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 13:35:30.008576  108530 reflector.go:158] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0827 13:35:30.008604  108530 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0827 13:35:30.008630  108530 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:30.008678  108530 reflector.go:158] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0827 13:35:30.008692  108530 client.go:361] parsed scheme: "endpoint"
I0827 13:35:30.008704  108530 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 13:35:30.008746  108530 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0827 13:35:30.008805  108530 reflector.go:158] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0827 13:35:30.008828  108530 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:30.008898  108530 client.go:361] parsed scheme: "endpoint"
I0827 13:35:30.008915  108530 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 13:35:30.008954  108530 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0827 13:35:30.009032  108530 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:30.009111  108530 client.go:361] parsed scheme: "endpoint"
I0827 13:35:30.009126  108530 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 13:35:30.009196  108530 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0827 13:35:30.009216  108530 reflector.go:158] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0827 13:35:30.009322  108530 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:30.009474  108530 client.go:361] parsed scheme: "endpoint"
I0827 13:35:30.009522  108530 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 13:35:30.009629  108530 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0827 13:35:30.009686  108530 master.go:434] Enabling API group "rbac.authorization.k8s.io".
I0827 13:35:30.010041  108530 watch_cache.go:405] Replace watchCache (rev: 33796) 
I0827 13:35:30.010140  108530 reflector.go:158] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0827 13:35:30.010669  108530 watch_cache.go:405] Replace watchCache (rev: 33796) 
I0827 13:35:30.010960  108530 watch_cache.go:405] Replace watchCache (rev: 33796) 
I0827 13:35:30.011079  108530 reflector.go:158] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0827 13:35:30.011577  108530 reflector.go:158] Listing and watching *batch.Job from storage/cacher.go:/jobs
I0827 13:35:30.011660  108530 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:30.011749  108530 client.go:361] parsed scheme: "endpoint"
I0827 13:35:30.011763  108530 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 13:35:30.011928  108530 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0827 13:35:30.012001  108530 reflector.go:158] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0827 13:35:30.012061  108530 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:30.012150  108530 client.go:361] parsed scheme: "endpoint"
I0827 13:35:30.012164  108530 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 13:35:30.012220  108530 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0827 13:35:30.012229  108530 master.go:434] Enabling API group "scheduling.k8s.io".
I0827 13:35:30.012356  108530 master.go:423] Skipping disabled API group "settings.k8s.io".
I0827 13:35:30.012444  108530 watch_cache.go:405] Replace watchCache (rev: 33796) 
I0827 13:35:30.012450  108530 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:30.012577  108530 client.go:361] parsed scheme: "endpoint"
I0827 13:35:30.012588  108530 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 13:35:30.012648  108530 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0827 13:35:30.012804  108530 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:30.012817  108530 watch_cache.go:405] Replace watchCache (rev: 33796) 
I0827 13:35:30.012897  108530 client.go:361] parsed scheme: "endpoint"
I0827 13:35:30.012909  108530 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 13:35:30.012981  108530 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0827 13:35:30.012998  108530 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:30.013031  108530 reflector.go:158] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0827 13:35:30.013064  108530 client.go:361] parsed scheme: "endpoint"
I0827 13:35:30.013074  108530 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 13:35:30.013106  108530 reflector.go:158] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0827 13:35:30.013113  108530 store.go:1342] Monitoring csinodes.storage.k8s.io count at <storage-prefix>//csinodes
I0827 13:35:30.013127  108530 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:30.013178  108530 reflector.go:158] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0827 13:35:30.013212  108530 client.go:361] parsed scheme: "endpoint"
I0827 13:35:30.013222  108530 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 13:35:30.013278  108530 store.go:1342] Monitoring csidrivers.storage.k8s.io count at <storage-prefix>//csidrivers
I0827 13:35:30.013295  108530 reflector.go:158] Listing and watching *storage.CSINode from storage/cacher.go:/csinodes
I0827 13:35:30.013366  108530 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:30.013400  108530 reflector.go:158] Listing and watching *storage.CSIDriver from storage/cacher.go:/csidrivers
I0827 13:35:30.013481  108530 client.go:361] parsed scheme: "endpoint"
I0827 13:35:30.013493  108530 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 13:35:30.013542  108530 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0827 13:35:30.013606  108530 reflector.go:158] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0827 13:35:30.013615  108530 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:30.013669  108530 client.go:361] parsed scheme: "endpoint"
I0827 13:35:30.013678  108530 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 13:35:30.013722  108530 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0827 13:35:30.013731  108530 master.go:434] Enabling API group "storage.k8s.io".
I0827 13:35:30.013804  108530 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:30.013895  108530 client.go:361] parsed scheme: "endpoint"
I0827 13:35:30.013910  108530 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 13:35:30.013910  108530 reflector.go:158] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0827 13:35:30.014034  108530 store.go:1342] Monitoring deployments.apps count at <storage-prefix>//deployments
I0827 13:35:30.014089  108530 reflector.go:158] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0827 13:35:30.014135  108530 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:30.014195  108530 client.go:361] parsed scheme: "endpoint"
I0827 13:35:30.014203  108530 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 13:35:30.014266  108530 store.go:1342] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0827 13:35:30.014334  108530 reflector.go:158] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0827 13:35:30.014351  108530 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:30.014421  108530 client.go:361] parsed scheme: "endpoint"
I0827 13:35:30.014430  108530 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 13:35:30.014486  108530 store.go:1342] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0827 13:35:30.014557  108530 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:30.014596  108530 reflector.go:158] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0827 13:35:30.014610  108530 client.go:361] parsed scheme: "endpoint"
I0827 13:35:30.014620  108530 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 13:35:30.014678  108530 store.go:1342] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0827 13:35:30.014695  108530 reflector.go:158] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0827 13:35:30.014838  108530 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:30.014966  108530 client.go:361] parsed scheme: "endpoint"
I0827 13:35:30.014982  108530 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 13:35:30.015072  108530 store.go:1342] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0827 13:35:30.015082  108530 master.go:434] Enabling API group "apps".
I0827 13:35:30.015100  108530 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:30.015206  108530 client.go:361] parsed scheme: "endpoint"
I0827 13:35:30.015218  108530 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 13:35:30.015271  108530 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0827 13:35:30.015291  108530 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:30.015351  108530 client.go:361] parsed scheme: "endpoint"
I0827 13:35:30.015363  108530 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 13:35:30.015437  108530 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0827 13:35:30.015459  108530 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:30.015476  108530 reflector.go:158] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0827 13:35:30.015514  108530 client.go:361] parsed scheme: "endpoint"
I0827 13:35:30.015525  108530 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 13:35:30.015578  108530 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0827 13:35:30.015592  108530 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:30.015663  108530 client.go:361] parsed scheme: "endpoint"
I0827 13:35:30.015674  108530 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 13:35:30.015671  108530 reflector.go:158] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0827 13:35:30.015715  108530 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0827 13:35:30.015722  108530 master.go:434] Enabling API group "admissionregistration.k8s.io".
I0827 13:35:30.015739  108530 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:30.015758  108530 reflector.go:158] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0827 13:35:30.015787  108530 reflector.go:158] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0827 13:35:30.015906  108530 reflector.go:158] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0827 13:35:30.015985  108530 client.go:361] parsed scheme: "endpoint"
I0827 13:35:30.016002  108530 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 13:35:30.016112  108530 store.go:1342] Monitoring events count at <storage-prefix>//events
I0827 13:35:30.016125  108530 master.go:434] Enabling API group "events.k8s.io".
I0827 13:35:30.016145  108530 reflector.go:158] Listing and watching *core.Event from storage/cacher.go:/events
I0827 13:35:30.016318  108530 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:30.016498  108530 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:30.016857  108530 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:30.016981  108530 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:30.017163  108530 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:30.017270  108530 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:30.017450  108530 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:30.017590  108530 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:30.017682  108530 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:30.017775  108530 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:30.017892  108530 watch_cache.go:405] Replace watchCache (rev: 33796) 
I0827 13:35:30.018656  108530 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:30.018914  108530 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:30.019553  108530 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:30.019806  108530 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:30.020561  108530 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:30.020901  108530 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:30.021581  108530 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:30.021770  108530 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:30.022552  108530 watch_cache.go:405] Replace watchCache (rev: 33797) 
I0827 13:35:30.022616  108530 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:30.022677  108530 watch_cache.go:405] Replace watchCache (rev: 33797) 
I0827 13:35:30.022818  108530 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0827 13:35:30.022888  108530 genericapiserver.go:390] Skipping API batch/v2alpha1 because it has no resources.
I0827 13:35:30.023499  108530 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:30.023649  108530 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:30.023936  108530 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:30.024689  108530 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:30.025445  108530 watch_cache.go:405] Replace watchCache (rev: 33798) 
I0827 13:35:30.025495  108530 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:30.026230  108530 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:30.026572  108530 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:30.026766  108530 watch_cache.go:405] Replace watchCache (rev: 33798) 
I0827 13:35:30.027233  108530 watch_cache.go:405] Replace watchCache (rev: 33798) 
I0827 13:35:30.027494  108530 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:30.028268  108530 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:30.028527  108530 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:30.029122  108530 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0827 13:35:30.029189  108530 genericapiserver.go:390] Skipping API node.k8s.io/v1alpha1 because it has no resources.
I0827 13:35:30.029990  108530 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:30.030273  108530 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:30.030331  108530 watch_cache.go:405] Replace watchCache (rev: 33798) 
I0827 13:35:30.030802  108530 watch_cache.go:405] Replace watchCache (rev: 33798) 
I0827 13:35:30.030805  108530 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:30.031464  108530 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:30.031567  108530 watch_cache.go:405] Replace watchCache (rev: 33798) 
I0827 13:35:30.032046  108530 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:30.032071  108530 watch_cache.go:405] Replace watchCache (rev: 33798) 
I0827 13:35:30.032813  108530 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:30.033123  108530 watch_cache.go:405] Replace watchCache (rev: 33798) 
I0827 13:35:30.033612  108530 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:30.033931  108530 watch_cache.go:405] Replace watchCache (rev: 33798) 
I0827 13:35:30.034292  108530 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:30.034381  108530 watch_cache.go:405] Replace watchCache (rev: 33798) 
I0827 13:35:30.034651  108530 watch_cache.go:405] Replace watchCache (rev: 33798) 
I0827 13:35:30.034680  108530 watch_cache.go:405] Replace watchCache (rev: 33798) 
I0827 13:35:30.034761  108530 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:30.034833  108530 watch_cache.go:405] Replace watchCache (rev: 33798) 
I0827 13:35:30.034987  108530 watch_cache.go:405] Replace watchCache (rev: 33798) 
I0827 13:35:30.035112  108530 watch_cache.go:405] Replace watchCache (rev: 33798) 
I0827 13:35:30.035238  108530 watch_cache.go:405] Replace watchCache (rev: 33798) 
I0827 13:35:30.035489  108530 watch_cache.go:405] Replace watchCache (rev: 33798) 
I0827 13:35:30.035624  108530 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:30.036277  108530 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0827 13:35:30.036372  108530 genericapiserver.go:390] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
I0827 13:35:30.036559  108530 watch_cache.go:405] Replace watchCache (rev: 33798) 
I0827 13:35:30.036702  108530 watch_cache.go:405] Replace watchCache (rev: 33798) 
I0827 13:35:30.036756  108530 watch_cache.go:405] Replace watchCache (rev: 33798) 
I0827 13:35:30.036896  108530 watch_cache.go:405] Replace watchCache (rev: 33798) 
I0827 13:35:30.036988  108530 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:30.037523  108530 watch_cache.go:405] Replace watchCache (rev: 33799) 
I0827 13:35:30.037575  108530 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0827 13:35:30.037636  108530 genericapiserver.go:390] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
I0827 13:35:30.038302  108530 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:30.038855  108530 watch_cache.go:405] Replace watchCache (rev: 33799) 
I0827 13:35:30.038829  108530 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:30.038952  108530 watch_cache.go:405] Replace watchCache (rev: 33799) 
I0827 13:35:30.039190  108530 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:30.039350  108530 watch_cache.go:405] Replace watchCache (rev: 33799) 
I0827 13:35:30.039485  108530 watch_cache.go:405] Replace watchCache (rev: 33799) 
I0827 13:35:30.039560  108530 watch_cache.go:405] Replace watchCache (rev: 33799) 
I0827 13:35:30.039803  108530 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:30.040029  108530 watch_cache.go:405] Replace watchCache (rev: 33799) 
I0827 13:35:30.040079  108530 watch_cache.go:405] Replace watchCache (rev: 33799) 
I0827 13:35:30.040312  108530 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:30.040770  108530 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:30.041236  108530 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0827 13:35:30.041287  108530 genericapiserver.go:390] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
I0827 13:35:30.041986  108530 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:30.042587  108530 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:30.042764  108530 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:30.043237  108530 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:30.043433  108530 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:30.043592  108530 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:30.044098  108530 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:30.044272  108530 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:30.044479  108530 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:30.045008  108530 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:30.045187  108530 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:30.045398  108530 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0827 13:35:30.045442  108530 genericapiserver.go:390] Skipping API apps/v1beta2 because it has no resources.
W0827 13:35:30.045447  108530 genericapiserver.go:390] Skipping API apps/v1beta1 because it has no resources.
I0827 13:35:30.045942  108530 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:30.046346  108530 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:30.046888  108530 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:30.047354  108530 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:30.047951  108530 watch_cache.go:405] Replace watchCache (rev: 33800) 
I0827 13:35:30.048108  108530 storage_factory.go:285] storing events.events.k8s.io in events.k8s.io/v1beta1, reading as events.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"71d3f8fe-e746-4d4b-9ba2-d3089ba91dc8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:30.050117  108530 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0827 13:35:30.050142  108530 healthz.go:169] healthz check poststarthook/bootstrap-controller failed: not finished
I0827 13:35:30.050150  108530 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 13:35:30.050157  108530 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0827 13:35:30.050163  108530 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0827 13:35:30.050168  108530 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[-]poststarthook/bootstrap-controller failed: reason withheld
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0827 13:35:30.050189  108530 httplog.go:90] GET /healthz: (182.474µs) 0 [Go-http-client/1.1 127.0.0.1:38372]
I0827 13:35:30.051056  108530 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.004813ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38374]
I0827 13:35:30.053089  108530 httplog.go:90] GET /api/v1/services: (914.702µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38374]
I0827 13:35:30.056675  108530 httplog.go:90] GET /api/v1/services: (872.655µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38374]
I0827 13:35:30.059788  108530 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0827 13:35:30.059820  108530 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 13:35:30.059865  108530 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0827 13:35:30.059876  108530 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0827 13:35:30.059884  108530 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0827 13:35:30.059958  108530 httplog.go:90] GET /healthz: (377.001µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:30.061155  108530 httplog.go:90] GET /api/v1/services: (959.647µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38376]
I0827 13:35:30.061325  108530 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.077397ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38374]
I0827 13:35:30.063284  108530 httplog.go:90] GET /api/v1/services: (867.556µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:30.063560  108530 httplog.go:90] POST /api/v1/namespaces: (1.754929ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38376]
I0827 13:35:30.065704  108530 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.900557ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38376]
I0827 13:35:30.067064  108530 httplog.go:90] POST /api/v1/namespaces: (973.308µs) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38376]
I0827 13:35:30.068090  108530 httplog.go:90] GET /api/v1/namespaces/kube-node-lease: (811.652µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38376]
I0827 13:35:30.069574  108530 httplog.go:90] POST /api/v1/namespaces: (1.266796ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38376]
I0827 13:35:30.070741  108530 cacher.go:763] cacher (*core.Namespace): 1 objects queued in incoming channel.
I0827 13:35:30.150996  108530 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0827 13:35:30.151193  108530 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 13:35:30.151322  108530 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0827 13:35:30.151410  108530 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0827 13:35:30.151462  108530 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0827 13:35:30.151714  108530 httplog.go:90] GET /healthz: (951.782µs) 0 [Go-http-client/1.1 127.0.0.1:38376]
I0827 13:35:30.161657  108530 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0827 13:35:30.161778  108530 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 13:35:30.161870  108530 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0827 13:35:30.161929  108530 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0827 13:35:30.161969  108530 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0827 13:35:30.162119  108530 httplog.go:90] GET /healthz: (597.25µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38376]
I0827 13:35:30.251151  108530 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0827 13:35:30.251190  108530 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 13:35:30.251202  108530 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0827 13:35:30.251212  108530 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0827 13:35:30.251221  108530 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0827 13:35:30.251262  108530 httplog.go:90] GET /healthz: (274.456µs) 0 [Go-http-client/1.1 127.0.0.1:38376]
I0827 13:35:30.261214  108530 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0827 13:35:30.261364  108530 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 13:35:30.261423  108530 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0827 13:35:30.261468  108530 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0827 13:35:30.261496  108530 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0827 13:35:30.261603  108530 httplog.go:90] GET /healthz: (604.464µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38376]
I0827 13:35:30.350957  108530 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0827 13:35:30.351117  108530 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 13:35:30.351233  108530 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0827 13:35:30.351297  108530 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0827 13:35:30.351344  108530 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0827 13:35:30.351504  108530 httplog.go:90] GET /healthz: (702.654µs) 0 [Go-http-client/1.1 127.0.0.1:38376]
I0827 13:35:30.360815  108530 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0827 13:35:30.360872  108530 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 13:35:30.360885  108530 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0827 13:35:30.360895  108530 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0827 13:35:30.360903  108530 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0827 13:35:30.360974  108530 httplog.go:90] GET /healthz: (258.498µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38376]
I0827 13:35:30.450906  108530 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0827 13:35:30.450937  108530 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 13:35:30.450946  108530 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0827 13:35:30.450953  108530 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0827 13:35:30.450960  108530 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0827 13:35:30.450991  108530 httplog.go:90] GET /healthz: (264.248µs) 0 [Go-http-client/1.1 127.0.0.1:38376]
I0827 13:35:30.460931  108530 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0827 13:35:30.460966  108530 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 13:35:30.460981  108530 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0827 13:35:30.460991  108530 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0827 13:35:30.460999  108530 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0827 13:35:30.461059  108530 httplog.go:90] GET /healthz: (472.919µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38376]
I0827 13:35:30.550968  108530 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0827 13:35:30.551122  108530 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 13:35:30.551173  108530 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0827 13:35:30.551219  108530 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0827 13:35:30.551267  108530 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0827 13:35:30.551415  108530 httplog.go:90] GET /healthz: (604.807µs) 0 [Go-http-client/1.1 127.0.0.1:38376]
I0827 13:35:30.560707  108530 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0827 13:35:30.560812  108530 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 13:35:30.560960  108530 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0827 13:35:30.561015  108530 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0827 13:35:30.561078  108530 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0827 13:35:30.561250  108530 httplog.go:90] GET /healthz: (677.29µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38376]
I0827 13:35:30.651098  108530 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0827 13:35:30.651250  108530 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 13:35:30.651329  108530 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0827 13:35:30.651405  108530 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0827 13:35:30.651465  108530 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0827 13:35:30.651659  108530 httplog.go:90] GET /healthz: (839.144µs) 0 [Go-http-client/1.1 127.0.0.1:38376]
I0827 13:35:30.660797  108530 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0827 13:35:30.660832  108530 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 13:35:30.660860  108530 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0827 13:35:30.660869  108530 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0827 13:35:30.660876  108530 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0827 13:35:30.660915  108530 httplog.go:90] GET /healthz: (247.818µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38376]
I0827 13:35:30.750818  108530 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0827 13:35:30.750932  108530 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 13:35:30.750949  108530 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0827 13:35:30.750959  108530 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0827 13:35:30.750966  108530 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0827 13:35:30.750999  108530 httplog.go:90] GET /healthz: (320.493µs) 0 [Go-http-client/1.1 127.0.0.1:38376]
I0827 13:35:30.761497  108530 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0827 13:35:30.761532  108530 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 13:35:30.761542  108530 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0827 13:35:30.761551  108530 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0827 13:35:30.761559  108530 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0827 13:35:30.761597  108530 httplog.go:90] GET /healthz: (237.071µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38376]
I0827 13:35:30.851222  108530 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0827 13:35:30.851262  108530 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 13:35:30.851277  108530 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0827 13:35:30.851287  108530 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0827 13:35:30.851294  108530 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0827 13:35:30.851323  108530 httplog.go:90] GET /healthz: (241.849µs) 0 [Go-http-client/1.1 127.0.0.1:38376]
I0827 13:35:30.860694  108530 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0827 13:35:30.860732  108530 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 13:35:30.860744  108530 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0827 13:35:30.860753  108530 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0827 13:35:30.860760  108530 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0827 13:35:30.860792  108530 httplog.go:90] GET /healthz: (242.671µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38376]
I0827 13:35:30.950991  108530 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0827 13:35:30.951029  108530 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 13:35:30.951041  108530 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0827 13:35:30.951051  108530 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0827 13:35:30.951059  108530 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0827 13:35:30.951101  108530 httplog.go:90] GET /healthz: (306.017µs) 0 [Go-http-client/1.1 127.0.0.1:38376]
I0827 13:35:30.953786  108530 client.go:361] parsed scheme: "endpoint"
I0827 13:35:30.953936  108530 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 13:35:30.961696  108530 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 13:35:30.961727  108530 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0827 13:35:30.961735  108530 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0827 13:35:30.961741  108530 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0827 13:35:30.961788  108530 httplog.go:90] GET /healthz: (1.200828ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38376]
I0827 13:35:31.051879  108530 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 13:35:31.051922  108530 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0827 13:35:31.051933  108530 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0827 13:35:31.051941  108530 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0827 13:35:31.051978  108530 httplog.go:90] GET /healthz: (1.185996ms) 0 [Go-http-client/1.1 127.0.0.1:38396]
I0827 13:35:31.052030  108530 httplog.go:90] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-node-critical: (1.768073ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:31.052064  108530 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.808797ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38376]
I0827 13:35:31.052327  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.698849ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38394]
I0827 13:35:31.053628  108530 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: (960.826µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38396]
I0827 13:35:31.053640  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (989.819µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:31.054694  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (685.382µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:31.055596  108530 httplog.go:90] POST /api/v1/namespaces/kube-system/configmaps: (1.591014ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38396]
I0827 13:35:31.055981  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (754.914µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38400]
I0827 13:35:31.056292  108530 httplog.go:90] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (1.344287ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:31.056458  108530 storage_scheduling.go:139] created PriorityClass system-node-critical with value 2000001000
I0827 13:35:31.057091  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (848.316µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38400]
I0827 13:35:31.057233  108530 httplog.go:90] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-cluster-critical: (646.294µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:31.058225  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (825.783µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38398]
I0827 13:35:31.058912  108530 httplog.go:90] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (1.066637ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:31.059054  108530 storage_scheduling.go:139] created PriorityClass system-cluster-critical with value 2000000000
I0827 13:35:31.059070  108530 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
I0827 13:35:31.060619  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (2.090584ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38398]
I0827 13:35:31.061535  108530 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 13:35:31.061623  108530 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0827 13:35:31.062028  108530 httplog.go:90] GET /healthz: (1.414893ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:31.062162  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (752.078µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38398]
I0827 13:35:31.063651  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (735.605µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:31.064532  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-admin: (574.639µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:31.066072  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.114499ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:31.066287  108530 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/cluster-admin
I0827 13:35:31.067186  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:discovery: (739.098µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:31.068809  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.263727ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:31.068994  108530 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:discovery
I0827 13:35:31.069956  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:basic-user: (781.081µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:31.071539  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.162272ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:31.071704  108530 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:basic-user
I0827 13:35:31.072673  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:public-info-viewer: (746.348µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:31.074302  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.196289ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:31.074493  108530 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:public-info-viewer
I0827 13:35:31.076044  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (795.876µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:31.077867  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.39535ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:31.078123  108530 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/admin
I0827 13:35:31.079233  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (925.325µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:31.080834  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.207567ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:31.081081  108530 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/edit
I0827 13:35:31.082281  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (915.252µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:31.083808  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.149493ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:31.083991  108530 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/view
I0827 13:35:31.085093  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (883.591µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:31.086754  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.252837ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:31.086951  108530 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin
I0827 13:35:31.087999  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (839.763µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:31.089927  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.346556ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:31.090453  108530 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit
I0827 13:35:31.091835  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (1.113594ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:31.093781  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.471906ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:31.094236  108530 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view
I0827 13:35:31.095258  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:heapster: (778.908µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:31.097058  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.429446ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:31.097261  108530 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:heapster
I0827 13:35:31.098489  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node: (1.064786ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:31.100354  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.467521ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:31.100927  108530 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node
I0827 13:35:31.103098  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-problem-detector: (1.977638ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:31.104630  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.116182ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:31.104939  108530 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector
I0827 13:35:31.105823  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-proxier: (739.197µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:31.107230  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.034772ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:31.107467  108530 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-proxier
I0827 13:35:31.108292  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kubelet-api-admin: (642.046µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:31.110311  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.589121ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:31.110573  108530 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kubelet-api-admin
I0827 13:35:31.111783  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-bootstrapper: (955.645µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:31.113810  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.507675ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:31.114030  108530 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-bootstrapper
I0827 13:35:31.115057  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:auth-delegator: (825.321µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:31.116631  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.254447ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:31.116951  108530 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:auth-delegator
I0827 13:35:31.117712  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-aggregator: (589.019µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:31.119289  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.188099ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:31.119667  108530 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator
I0827 13:35:31.120627  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-controller-manager: (681.395µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:31.123022  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.921988ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:31.129169  108530 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager
I0827 13:35:31.130509  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-dns: (1.077276ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:31.132577  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.592755ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:31.132787  108530 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-dns
I0827 13:35:31.133867  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:persistent-volume-provisioner: (819.715µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:31.135748  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.461941ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:31.136079  108530 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner
I0827 13:35:31.137204  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-attacher: (909.768µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:31.139389  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.47926ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:31.139559  108530 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:csi-external-attacher
I0827 13:35:31.140475  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:nodeclient: (740.989µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:31.142313  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.512406ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:31.142500  108530 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:nodeclient
I0827 13:35:31.143716  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient: (1.008181ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:31.145361  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.184701ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:31.145600  108530 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
I0827 13:35:31.147141  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:volume-scheduler: (912.611µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:31.149431  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.889854ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:31.149828  108530 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:volume-scheduler
I0827 13:35:31.150989  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-scheduler: (919.304µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:31.151435  108530 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 13:35:31.151461  108530 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0827 13:35:31.151493  108530 httplog.go:90] GET /healthz: (837.515µs) 0 [Go-http-client/1.1 127.0.0.1:38396]
I0827 13:35:31.153431  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.979586ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:31.153726  108530 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler
I0827 13:35:31.155197  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-provisioner: (1.244323ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:31.158718  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.935232ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:31.158983  108530 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:csi-external-provisioner
I0827 13:35:31.160408  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:attachdetach-controller: (1.10047ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:31.161391  108530 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 13:35:31.161479  108530 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0827 13:35:31.161615  108530 httplog.go:90] GET /healthz: (1.149876ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38396]
I0827 13:35:31.162357  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.448294ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:31.162642  108530 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0827 13:35:31.163467  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:clusterrole-aggregation-controller: (667.365µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:31.164988  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.167342ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:31.165140  108530 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0827 13:35:31.166089  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:cronjob-controller: (761.61µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:31.167729  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.305791ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:31.167955  108530 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0827 13:35:31.169345  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:daemon-set-controller: (1.198042ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:31.171266  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.486808ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:31.171597  108530 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0827 13:35:31.172683  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:deployment-controller: (693.112µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:31.174412  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.401231ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:31.174667  108530 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller
I0827 13:35:31.175583  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:disruption-controller: (688.647µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:31.177336  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.282719ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:31.177573  108530 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller
I0827 13:35:31.178927  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:endpoint-controller: (1.202941ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:31.180706  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.405502ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:31.180958  108530 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0827 13:35:31.181824  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:expand-controller: (730.49µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:31.183457  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.262903ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:31.183636  108530 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:expand-controller
I0827 13:35:31.184479  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:generic-garbage-collector: (710.6µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:31.186168  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.421869ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:31.186327  108530 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0827 13:35:31.187278  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:horizontal-pod-autoscaler: (793.228µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:31.189802  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.818356ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:31.190418  108530 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0827 13:35:31.194943  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:job-controller: (4.279839ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:31.198810  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.361874ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:31.200183  108530 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller
I0827 13:35:31.201824  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:namespace-controller: (1.340457ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:31.204084  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.718074ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:31.204368  108530 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:namespace-controller
I0827 13:35:31.205458  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:node-controller: (827.468µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:31.207416  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.580076ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:31.207787  108530 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:node-controller
I0827 13:35:31.209062  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:persistent-volume-binder: (936.396µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:31.211207  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.737646ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:31.211560  108530 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0827 13:35:31.212804  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pod-garbage-collector: (807.842µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:31.214635  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.267098ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:31.214893  108530 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0827 13:35:31.216051  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replicaset-controller: (889.48µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:31.217604  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.216107ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:31.217792  108530 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0827 13:35:31.218802  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replication-controller: (812.251µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:31.220425  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.247499ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:31.220590  108530 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller
I0827 13:35:31.221446  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:resourcequota-controller: (685.586µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:31.223807  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.575242ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:31.224011  108530 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0827 13:35:31.225173  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:route-controller: (962.233µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:31.226970  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.449692ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:31.227178  108530 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:route-controller
I0827 13:35:31.228329  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-account-controller: (910.206µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:31.230178  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.44464ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:31.230367  108530 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-account-controller
I0827 13:35:31.231650  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-controller: (1.03663ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:31.233473  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.449791ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:31.233646  108530 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-controller
I0827 13:35:31.234532  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:statefulset-controller: (713.623µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:31.236617  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.586053ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:31.236910  108530 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0827 13:35:31.238008  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:ttl-controller: (747.833µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:31.239573  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.220976ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:31.239818  108530 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-controller
I0827 13:35:31.251619  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:certificate-controller: (1.120828ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:31.252064  108530 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 13:35:31.252094  108530 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0827 13:35:31.252133  108530 httplog.go:90] GET /healthz: (1.477093ms) 0 [Go-http-client/1.1 127.0.0.1:38396]
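(Editor's note, not part of the captured log: the repeated /healthz failures in this log come from the [-]poststarthook/rbac/bootstrap-roles check, which stays failing until the RBAC bootstrap policy above has been fully reconciled. Below is a minimal sketch of polling that endpoint; the base URL is a placeholder, since the integration test framework allocates its own server address.)

// Illustrative sketch only: poll the apiserver /healthz endpoint until the
// rbac/bootstrap-roles poststarthook reports healthy.
package main

import (
	"fmt"
	"io/ioutil"
	"net/http"
	"time"
)

func main() {
	const healthz = "http://127.0.0.1:8080/healthz?verbose" // hypothetical address
	for i := 0; i < 30; i++ {
		resp, err := http.Get(healthz)
		if err != nil {
			fmt.Println("healthz not reachable yet:", err)
			time.Sleep(100 * time.Millisecond)
			continue
		}
		body, _ := ioutil.ReadAll(resp.Body)
		resp.Body.Close()
		// The verbose body lists each check as [+] (ok) or [-] (failing),
		// matching the blocks shown in the log above.
		fmt.Printf("status=%d\n%s\n", resp.StatusCode, body)
		if resp.StatusCode == http.StatusOK {
			return
		}
		time.Sleep(100 * time.Millisecond)
	}
}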
I0827 13:35:31.261550  108530 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 13:35:31.261582  108530 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0827 13:35:31.261626  108530 httplog.go:90] GET /healthz: (988.707µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38396]
I0827 13:35:31.272945  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.38088ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38396]
I0827 13:35:31.273282  108530 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller
I0827 13:35:31.292377  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pvc-protection-controller: (1.732831ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38396]
I0827 13:35:31.313528  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.866471ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38396]
I0827 13:35:31.313906  108530 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0827 13:35:31.332153  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pv-protection-controller: (1.411599ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38396]
I0827 13:35:31.352239  108530 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 13:35:31.352280  108530 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0827 13:35:31.352480  108530 httplog.go:90] GET /healthz: (1.633487ms) 0 [Go-http-client/1.1 127.0.0.1:38372]
I0827 13:35:31.352938  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.438919ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38396]
I0827 13:35:31.353250  108530 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0827 13:35:31.362308  108530 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 13:35:31.362361  108530 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0827 13:35:31.362415  108530 httplog.go:90] GET /healthz: (973.382µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38396]
I0827 13:35:31.372122  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin: (1.698606ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38396]
I0827 13:35:31.395310  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (4.514473ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38396]
I0827 13:35:31.395649  108530 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin
I0827 13:35:31.411825  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (1.298532ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38396]
I0827 13:35:31.432648  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.151356ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38396]
I0827 13:35:31.432912  108530 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery
I0827 13:35:31.451435  108530 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 13:35:31.451557  108530 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0827 13:35:31.451694  108530 httplog.go:90] GET /healthz: (1.048731ms) 0 [Go-http-client/1.1 127.0.0.1:38372]
I0827 13:35:31.452034  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:basic-user: (1.509955ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38396]
I0827 13:35:31.461822  108530 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 13:35:31.461999  108530 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0827 13:35:31.462204  108530 httplog.go:90] GET /healthz: (1.289233ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38396]
I0827 13:35:31.472975  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.996412ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38396]
I0827 13:35:31.473940  108530 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user
I0827 13:35:31.492185  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:public-info-viewer: (1.608846ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38396]
I0827 13:35:31.512928  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.356332ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38396]
I0827 13:35:31.513229  108530 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:public-info-viewer
I0827 13:35:31.531805  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node-proxier: (1.252858ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38396]
I0827 13:35:31.552330  108530 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 13:35:31.552360  108530 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0827 13:35:31.552401  108530 httplog.go:90] GET /healthz: (1.741109ms) 0 [Go-http-client/1.1 127.0.0.1:38372]
I0827 13:35:31.552587  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.924787ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38396]
I0827 13:35:31.552923  108530 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier
I0827 13:35:31.561452  108530 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 13:35:31.561487  108530 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0827 13:35:31.561530  108530 httplog.go:90] GET /healthz: (927.451µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38396]
I0827 13:35:31.571571  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-controller-manager: (1.100258ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38396]
I0827 13:35:31.593101  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.359124ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38396]
I0827 13:35:31.593391  108530 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager
I0827 13:35:31.612589  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-dns: (2.059628ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38396]
I0827 13:35:31.634475  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.131916ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38396]
I0827 13:35:31.634755  108530 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns
I0827 13:35:31.652755  108530 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 13:35:31.652784  108530 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0827 13:35:31.652825  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-scheduler: (2.413727ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38396]
I0827 13:35:31.652827  108530 httplog.go:90] GET /healthz: (1.788792ms) 0 [Go-http-client/1.1 127.0.0.1:38372]
I0827 13:35:31.661640  108530 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 13:35:31.661674  108530 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0827 13:35:31.661710  108530 httplog.go:90] GET /healthz: (1.199487ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:31.672238  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.741334ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:31.672448  108530 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler
I0827 13:35:31.691756  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:volume-scheduler: (1.233516ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:31.713966  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.741715ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:31.714285  108530 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:volume-scheduler
I0827 13:35:31.731393  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node: (927.223µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:31.752210  108530 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 13:35:31.752239  108530 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0827 13:35:31.752296  108530 httplog.go:90] GET /healthz: (1.384342ms) 0 [Go-http-client/1.1 127.0.0.1:38396]
I0827 13:35:31.752334  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.887266ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:31.752521  108530 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node
E0827 13:35:31.758610  108530 event_broadcaster.go:244] Unable to write event: 'Post http://127.0.0.1:37463/apis/events.k8s.io/v1beta1/namespaces/permit-pluginc09efd45-f39e-4575-9605-ccf7b9ec3485/events: dial tcp 127.0.0.1:37463: connect: connection refused' (may retry after sleeping)
I0827 13:35:31.761304  108530 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 13:35:31.761335  108530 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0827 13:35:31.761373  108530 httplog.go:90] GET /healthz: (849.421µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38396]
I0827 13:35:31.771672  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:attachdetach-controller: (1.221333ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38396]
I0827 13:35:31.792395  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.85352ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38396]
I0827 13:35:31.792653  108530 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0827 13:35:31.811759  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:clusterrole-aggregation-controller: (1.22461ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38396]
I0827 13:35:31.833476  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.92604ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38396]
I0827 13:35:31.833777  108530 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0827 13:35:31.851648  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:cronjob-controller: (1.138764ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38396]
I0827 13:35:31.852171  108530 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 13:35:31.852206  108530 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0827 13:35:31.852268  108530 httplog.go:90] GET /healthz: (969.014µs) 0 [Go-http-client/1.1 127.0.0.1:38372]
I0827 13:35:31.861560  108530 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 13:35:31.861591  108530 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0827 13:35:31.861645  108530 httplog.go:90] GET /healthz: (762.456µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:31.873108  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.62103ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:31.873432  108530 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0827 13:35:31.892612  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:daemon-set-controller: (1.92309ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:31.913013  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.435301ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:31.913336  108530 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0827 13:35:31.933082  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:deployment-controller: (2.501446ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:31.952287  108530 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 13:35:31.952319  108530 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0827 13:35:31.952369  108530 httplog.go:90] GET /healthz: (1.652261ms) 0 [Go-http-client/1.1 127.0.0.1:38396]
I0827 13:35:31.952691  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.21008ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:31.952996  108530 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I0827 13:35:31.961604  108530 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 13:35:31.961747  108530 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0827 13:35:31.962073  108530 httplog.go:90] GET /healthz: (1.476394ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:31.971600  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:disruption-controller: (1.175169ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:31.992590  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.034791ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:31.992814  108530 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I0827 13:35:32.012088  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:endpoint-controller: (1.570385ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:32.032390  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.879399ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:32.032743  108530 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0827 13:35:32.051422  108530 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 13:35:32.051704  108530 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0827 13:35:32.051941  108530 httplog.go:90] GET /healthz: (1.337534ms) 0 [Go-http-client/1.1 127.0.0.1:38396]
I0827 13:35:32.051594  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:expand-controller: (1.153573ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:32.061900  108530 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 13:35:32.061930  108530 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0827 13:35:32.061972  108530 httplog.go:90] GET /healthz: (899.592µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:32.072664  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.147689ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:32.072963  108530 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
I0827 13:35:32.091803  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:generic-garbage-collector: (1.205365ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:32.112990  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.503207ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:32.113215  108530 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0827 13:35:32.131595  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:horizontal-pod-autoscaler: (1.107349ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:32.152471  108530 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 13:35:32.152504  108530 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0827 13:35:32.152557  108530 httplog.go:90] GET /healthz: (1.854463ms) 0 [Go-http-client/1.1 127.0.0.1:38396]
I0827 13:35:32.152587  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.121525ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:32.152836  108530 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0827 13:35:32.161771  108530 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 13:35:32.161833  108530 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0827 13:35:32.161935  108530 httplog.go:90] GET /healthz: (1.292635ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:32.171639  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:job-controller: (1.097651ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:32.192831  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.338538ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:32.193147  108530 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I0827 13:35:32.211915  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:namespace-controller: (1.354618ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:32.232737  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.110857ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:32.232971  108530 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I0827 13:35:32.251746  108530 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 13:35:32.251867  108530 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0827 13:35:32.252025  108530 httplog.go:90] GET /healthz: (1.41885ms) 0 [Go-http-client/1.1 127.0.0.1:38396]
I0827 13:35:32.253961  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:node-controller: (1.375127ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:32.261451  108530 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 13:35:32.261485  108530 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0827 13:35:32.261521  108530 httplog.go:90] GET /healthz: (944.805µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:32.272941  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.384183ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:32.273478  108530 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I0827 13:35:32.292296  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:persistent-volume-binder: (1.735965ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:32.312676  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.144301ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:32.312970  108530 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0827 13:35:32.332060  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pod-garbage-collector: (1.59641ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:32.352596  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.036008ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:32.353203  108530 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0827 13:35:32.355281  108530 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 13:35:32.355352  108530 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0827 13:35:32.355385  108530 httplog.go:90] GET /healthz: (1.107601ms) 0 [Go-http-client/1.1 127.0.0.1:38372]
I0827 13:35:32.361676  108530 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 13:35:32.361711  108530 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0827 13:35:32.361747  108530 httplog.go:90] GET /healthz: (1.245866ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:32.371405  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replicaset-controller: (970.193µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:32.393121  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.616978ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:32.393453  108530 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0827 13:35:32.411833  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replication-controller: (1.345103ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:32.433982  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.484242ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:32.434854  108530 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I0827 13:35:32.451779  108530 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 13:35:32.451809  108530 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0827 13:35:32.451878  108530 httplog.go:90] GET /healthz: (809.698µs) 0 [Go-http-client/1.1 127.0.0.1:38396]
I0827 13:35:32.452236  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:resourcequota-controller: (1.795639ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:32.461432  108530 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 13:35:32.461457  108530 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0827 13:35:32.461554  108530 httplog.go:90] GET /healthz: (989.555µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:32.472796  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.337722ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:32.473090  108530 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0827 13:35:32.491883  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:route-controller: (1.359886ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:32.512552  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.036479ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:32.512977  108530 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I0827 13:35:32.532043  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-account-controller: (1.553338ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:32.551663  108530 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 13:35:32.551692  108530 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0827 13:35:32.551724  108530 httplog.go:90] GET /healthz: (1.017122ms) 0 [Go-http-client/1.1 127.0.0.1:38396]
I0827 13:35:32.552604  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.033754ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:32.552952  108530 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I0827 13:35:32.561389  108530 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 13:35:32.561413  108530 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0827 13:35:32.561439  108530 httplog.go:90] GET /healthz: (952.566µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:32.571579  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-controller: (1.169681ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:32.592271  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.774633ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:32.592710  108530 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I0827 13:35:32.611896  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:statefulset-controller: (1.429033ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:32.633333  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.705925ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:32.633573  108530 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0827 13:35:32.651779  108530 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 13:35:32.652085  108530 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0827 13:35:32.652269  108530 httplog.go:90] GET /healthz: (1.586504ms) 0 [Go-http-client/1.1 127.0.0.1:38396]
I0827 13:35:32.652094  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:ttl-controller: (1.637016ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:32.661616  108530 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 13:35:32.661649  108530 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0827 13:35:32.661696  108530 httplog.go:90] GET /healthz: (1.117026ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38396]
I0827 13:35:32.672693  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.22165ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38396]
I0827 13:35:32.672903  108530 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I0827 13:35:32.692108  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:certificate-controller: (1.479254ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38396]
I0827 13:35:32.713507  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.560852ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38396]
I0827 13:35:32.713796  108530 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I0827 13:35:32.731947  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pvc-protection-controller: (1.353458ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38396]
I0827 13:35:32.751879  108530 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 13:35:32.751913  108530 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0827 13:35:32.751958  108530 httplog.go:90] GET /healthz: (1.337994ms) 0 [Go-http-client/1.1 127.0.0.1:38372]
I0827 13:35:32.752740  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.293281ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38396]
I0827 13:35:32.752981  108530 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0827 13:35:32.761396  108530 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 13:35:32.761419  108530 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0827 13:35:32.761487  108530 httplog.go:90] GET /healthz: (977.084µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38396]
I0827 13:35:32.771401  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pv-protection-controller: (986.31µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38396]
I0827 13:35:32.792837  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.99964ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38396]
I0827 13:35:32.793094  108530 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0827 13:35:32.811681  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/extension-apiserver-authentication-reader: (1.179673ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38396]
I0827 13:35:32.813247  108530 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.147895ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38396]
I0827 13:35:32.832356  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.902779ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38396]
I0827 13:35:32.832608  108530 storage_rbac.go:278] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
I0827 13:35:32.851914  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:bootstrap-signer: (1.414017ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38396]
I0827 13:35:32.852200  108530 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 13:35:32.852224  108530 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0827 13:35:32.852264  108530 httplog.go:90] GET /healthz: (1.171675ms) 0 [Go-http-client/1.1 127.0.0.1:38372]
I0827 13:35:32.853488  108530 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.165564ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38396]
I0827 13:35:32.861861  108530 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 13:35:32.861895  108530 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0827 13:35:32.861950  108530 httplog.go:90] GET /healthz: (900.785µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38396]
I0827 13:35:32.872363  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.893437ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38396]
I0827 13:35:32.872557  108530 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0827 13:35:32.891857  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:cloud-provider: (1.339062ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38396]
I0827 13:35:32.893466  108530 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.257396ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38396]
I0827 13:35:32.912558  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.076929ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38396]
I0827 13:35:32.912803  108530 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0827 13:35:32.932095  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:token-cleaner: (1.576895ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38396]
I0827 13:35:32.934465  108530 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.816329ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38396]
I0827 13:35:32.951579  108530 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 13:35:32.951633  108530 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0827 13:35:32.951677  108530 httplog.go:90] GET /healthz: (1.021836ms) 0 [Go-http-client/1.1 127.0.0.1:38372]
I0827 13:35:32.952324  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.813212ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38396]
I0827 13:35:32.952614  108530 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0827 13:35:32.961504  108530 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 13:35:32.961534  108530 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0827 13:35:32.961578  108530 httplog.go:90] GET /healthz: (1.064834ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38396]
I0827 13:35:32.971941  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-controller-manager: (1.481575ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38396]
I0827 13:35:32.973942  108530 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.499897ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38396]
I0827 13:35:32.992439  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.937808ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38396]
I0827 13:35:32.992912  108530 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0827 13:35:33.011792  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-scheduler: (1.305653ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38396]
I0827 13:35:33.013594  108530 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.149469ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38396]
I0827 13:35:33.032517  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.976102ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38396]
I0827 13:35:33.032730  108530 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0827 13:35:33.051618  108530 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 13:35:33.051650  108530 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0827 13:35:33.051693  108530 httplog.go:90] GET /healthz: (955.307µs) 0 [Go-http-client/1.1 127.0.0.1:38372]
I0827 13:35:33.051926  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles/system:controller:bootstrap-signer: (1.247969ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38396]
I0827 13:35:33.053530  108530 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.225124ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38396]
I0827 13:35:33.061622  108530 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 13:35:33.061646  108530 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0827 13:35:33.061690  108530 httplog.go:90] GET /healthz: (830.395µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38396]
I0827 13:35:33.072094  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles: (1.694719ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38396]
I0827 13:35:33.072302  108530 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0827 13:35:33.091804  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::extension-apiserver-authentication-reader: (1.277632ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38396]
I0827 13:35:33.093407  108530 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.119172ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38396]
I0827 13:35:33.112521  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.015912ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38396]
I0827 13:35:33.112772  108530 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::extension-apiserver-authentication-reader in kube-system
I0827 13:35:33.132132  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-controller-manager: (1.334461ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38396]
I0827 13:35:33.133792  108530 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.12911ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38396]
I0827 13:35:33.152375  108530 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 13:35:33.152679  108530 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0827 13:35:33.152906  108530 httplog.go:90] GET /healthz: (2.21459ms) 0 [Go-http-client/1.1 127.0.0.1:38372]
I0827 13:35:33.152824  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.272999ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38396]
I0827 13:35:33.153530  108530 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0827 13:35:33.162707  108530 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 13:35:33.162740  108530 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0827 13:35:33.162807  108530 httplog.go:90] GET /healthz: (1.376424ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38396]
I0827 13:35:33.171661  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-scheduler: (1.208026ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38396]
I0827 13:35:33.173382  108530 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.300298ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38396]
I0827 13:35:33.192921  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.401901ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38396]
I0827 13:35:33.193238  108530 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0827 13:35:33.211939  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:bootstrap-signer: (1.426175ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38396]
I0827 13:35:33.213532  108530 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.201122ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38396]
I0827 13:35:33.232454  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.917338ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38396]
I0827 13:35:33.232880  108530 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0827 13:35:33.252488  108530 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 13:35:33.252722  108530 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0827 13:35:33.252913  108530 httplog.go:90] GET /healthz: (1.727183ms) 0 [Go-http-client/1.1 127.0.0.1:38372]
I0827 13:35:33.252529  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:cloud-provider: (1.46158ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38396]
I0827 13:35:33.255262  108530 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.510623ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38396]
I0827 13:35:33.261459  108530 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 13:35:33.261483  108530 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0827 13:35:33.261518  108530 httplog.go:90] GET /healthz: (909.804µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38396]
I0827 13:35:33.272262  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.835721ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38396]
I0827 13:35:33.272552  108530 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0827 13:35:33.292298  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:token-cleaner: (1.692814ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38396]
I0827 13:35:33.299375  108530 httplog.go:90] GET /api/v1/namespaces/kube-system: (6.216392ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38396]
I0827 13:35:33.312198  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.680959ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38396]
I0827 13:35:33.312524  108530 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0827 13:35:33.336307  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings/system:controller:bootstrap-signer: (5.716311ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38396]
I0827 13:35:33.338190  108530 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.336044ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38396]
I0827 13:35:33.352616  108530 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 13:35:33.352639  108530 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0827 13:35:33.352689  108530 httplog.go:90] GET /healthz: (2.079169ms) 0 [Go-http-client/1.1 127.0.0.1:38372]
I0827 13:35:33.353130  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings: (2.159789ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38396]
I0827 13:35:33.353330  108530 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0827 13:35:33.361606  108530 httplog.go:90] GET /healthz: (1.012399ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38396]
I0827 13:35:33.363433  108530 httplog.go:90] GET /api/v1/namespaces/default: (1.215536ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38396]
I0827 13:35:33.365626  108530 httplog.go:90] POST /api/v1/namespaces: (1.66094ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38396]
I0827 13:35:33.367265  108530 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.242285ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38396]
I0827 13:35:33.371213  108530 httplog.go:90] POST /api/v1/namespaces/default/services: (3.456361ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38396]
I0827 13:35:33.372353  108530 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (804.436µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38396]
I0827 13:35:33.375096  108530 httplog.go:90] POST /api/v1/namespaces/default/endpoints: (1.6246ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38396]
I0827 13:35:33.452140  108530 httplog.go:90] GET /healthz: (1.324378ms) 200 [Go-http-client/1.1 127.0.0.1:38396]
W0827 13:35:33.452737  108530 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0827 13:35:33.452791  108530 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0827 13:35:33.452802  108530 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0827 13:35:33.452817  108530 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0827 13:35:33.452827  108530 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0827 13:35:33.452836  108530 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0827 13:35:33.452858  108530 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0827 13:35:33.452868  108530 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0827 13:35:33.452882  108530 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0827 13:35:33.452890  108530 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0827 13:35:33.452965  108530 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0827 13:35:33.452985  108530 factory.go:294] Creating scheduler from algorithm provider 'DefaultProvider'
I0827 13:35:33.452993  108530 factory.go:382] Creating scheduler with fit predicates 'map[CheckNodeCondition:{} CheckNodeDiskPressure:{} CheckNodeMemoryPressure:{} CheckNodePIDPressure:{} CheckVolumeBinding:{} GeneralPredicates:{} MatchInterPodAffinity:{} MaxAzureDiskVolumeCount:{} MaxCSIVolumeCountPred:{} MaxEBSVolumeCount:{} MaxGCEPDVolumeCount:{} NoDiskConflict:{} NoVolumeZoneConflict:{} PodToleratesNodeTaints:{}]' and priority functions 'map[BalancedResourceAllocation:{} ImageLocalityPriority:{} InterPodAffinityPriority:{} LeastRequestedPriority:{} NodeAffinityPriority:{} NodePreferAvoidPodsPriority:{} SelectorSpreadPriority:{} TaintTolerationPriority:{}]'
I0827 13:35:33.453188  108530 shared_informer.go:197] Waiting for caches to sync for scheduler
I0827 13:35:33.453409  108530 reflector.go:120] Starting reflector *v1.Pod (12h0m0s) from k8s.io/kubernetes/test/integration/scheduler/util.go:230
I0827 13:35:33.453437  108530 reflector.go:158] Listing and watching *v1.Pod from k8s.io/kubernetes/test/integration/scheduler/util.go:230
I0827 13:35:33.454410  108530 httplog.go:90] GET /api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0: (677.474µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38396]
I0827 13:35:33.455116  108530 get.go:250] Starting watch for /api/v1/pods, rv=33791 labels= fields=status.phase!=Failed,status.phase!=Succeeded timeout=5m41s
I0827 13:35:33.553361  108530 shared_informer.go:227] caches populated
I0827 13:35:33.553398  108530 shared_informer.go:204] Caches are synced for scheduler 
I0827 13:35:33.553867  108530 reflector.go:120] Starting reflector *v1.Node (1s) from k8s.io/client-go/informers/factory.go:133
I0827 13:35:33.553880  108530 reflector.go:120] Starting reflector *v1.Service (1s) from k8s.io/client-go/informers/factory.go:133
I0827 13:35:33.553897  108530 reflector.go:158] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:133
I0827 13:35:33.553906  108530 reflector.go:158] Listing and watching *v1.Service from k8s.io/client-go/informers/factory.go:133
I0827 13:35:33.553961  108530 reflector.go:120] Starting reflector *v1.ReplicaSet (1s) from k8s.io/client-go/informers/factory.go:133
I0827 13:35:33.553978  108530 reflector.go:158] Listing and watching *v1.ReplicaSet from k8s.io/client-go/informers/factory.go:133
I0827 13:35:33.553994  108530 reflector.go:120] Starting reflector *v1.PersistentVolumeClaim (1s) from k8s.io/client-go/informers/factory.go:133
I0827 13:35:33.554007  108530 reflector.go:158] Listing and watching *v1.PersistentVolumeClaim from k8s.io/client-go/informers/factory.go:133
I0827 13:35:33.554049  108530 reflector.go:120] Starting reflector *v1.PersistentVolume (1s) from k8s.io/client-go/informers/factory.go:133
I0827 13:35:33.554062  108530 reflector.go:158] Listing and watching *v1.PersistentVolume from k8s.io/client-go/informers/factory.go:133
I0827 13:35:33.554111  108530 reflector.go:120] Starting reflector *v1.StatefulSet (1s) from k8s.io/client-go/informers/factory.go:133
I0827 13:35:33.554135  108530 reflector.go:158] Listing and watching *v1.StatefulSet from k8s.io/client-go/informers/factory.go:133
I0827 13:35:33.553895  108530 reflector.go:120] Starting reflector *v1.ReplicationController (1s) from k8s.io/client-go/informers/factory.go:133
I0827 13:35:33.554306  108530 reflector.go:158] Listing and watching *v1.ReplicationController from k8s.io/client-go/informers/factory.go:133
I0827 13:35:33.554437  108530 reflector.go:120] Starting reflector *v1beta1.CSINode (1s) from k8s.io/client-go/informers/factory.go:133
I0827 13:35:33.554456  108530 reflector.go:158] Listing and watching *v1beta1.CSINode from k8s.io/client-go/informers/factory.go:133
I0827 13:35:33.554720  108530 reflector.go:120] Starting reflector *v1beta1.PodDisruptionBudget (1s) from k8s.io/client-go/informers/factory.go:133
I0827 13:35:33.554736  108530 reflector.go:158] Listing and watching *v1beta1.PodDisruptionBudget from k8s.io/client-go/informers/factory.go:133
I0827 13:35:33.555283  108530 httplog.go:90] GET /api/v1/persistentvolumes?limit=500&resourceVersion=0: (490.69µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38582]
I0827 13:35:33.555287  108530 httplog.go:90] GET /api/v1/nodes?limit=500&resourceVersion=0: (584.88µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38584]
I0827 13:35:33.555368  108530 httplog.go:90] GET /api/v1/services?limit=500&resourceVersion=0: (741.837µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:33.555416  108530 httplog.go:90] GET /api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: (360.45µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38580]
I0827 13:35:33.555436  108530 httplog.go:90] GET /apis/apps/v1/statefulsets?limit=500&resourceVersion=0: (425.266µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38586]
I0827 13:35:33.555436  108530 httplog.go:90] GET /apis/storage.k8s.io/v1beta1/csinodes?limit=500&resourceVersion=0: (614.507µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38590]
I0827 13:35:33.555803  108530 httplog.go:90] GET /apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: (339.643µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38592]
I0827 13:35:33.555908  108530 httplog.go:90] GET /apis/apps/v1/replicasets?limit=500&resourceVersion=0: (380.808µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38578]
I0827 13:35:33.555926  108530 httplog.go:90] GET /api/v1/replicationcontrollers?limit=500&resourceVersion=0: (436.113µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38588]
I0827 13:35:33.556117  108530 get.go:250] Starting watch for /apis/apps/v1/statefulsets, rv=33798 labels= fields= timeout=7m36s
I0827 13:35:33.556187  108530 get.go:250] Starting watch for /api/v1/persistentvolumeclaims, rv=33787 labels= fields= timeout=6m8s
I0827 13:35:33.556272  108530 get.go:250] Starting watch for /api/v1/persistentvolumes, rv=33787 labels= fields= timeout=6m11s
I0827 13:35:33.556272  108530 get.go:250] Starting watch for /api/v1/nodes, rv=33790 labels= fields= timeout=7m5s
I0827 13:35:33.556447  108530 reflector.go:120] Starting reflector *v1.StorageClass (1s) from k8s.io/client-go/informers/factory.go:133
I0827 13:35:33.556461  108530 reflector.go:158] Listing and watching *v1.StorageClass from k8s.io/client-go/informers/factory.go:133
I0827 13:35:33.556535  108530 get.go:250] Starting watch for /api/v1/services, rv=34009 labels= fields= timeout=5m50s
I0827 13:35:33.556586  108530 get.go:250] Starting watch for /apis/storage.k8s.io/v1beta1/csinodes, rv=33799 labels= fields= timeout=6m8s
I0827 13:35:33.556783  108530 get.go:250] Starting watch for /apis/apps/v1/replicasets, rv=33798 labels= fields= timeout=6m34s
I0827 13:35:33.556967  108530 get.go:250] Starting watch for /api/v1/replicationcontrollers, rv=33792 labels= fields= timeout=6m5s
I0827 13:35:33.557108  108530 get.go:250] Starting watch for /apis/policy/v1beta1/poddisruptionbudgets, rv=33798 labels= fields= timeout=7m22s
I0827 13:35:33.557131  108530 httplog.go:90] GET /apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: (354.634µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38588]
I0827 13:35:33.557750  108530 get.go:250] Starting watch for /apis/storage.k8s.io/v1/storageclasses, rv=33798 labels= fields= timeout=8m6s
I0827 13:35:33.653693  108530 shared_informer.go:227] caches populated
I0827 13:35:33.753952  108530 shared_informer.go:227] caches populated
I0827 13:35:33.854193  108530 shared_informer.go:227] caches populated
I0827 13:35:33.954404  108530 shared_informer.go:227] caches populated
I0827 13:35:34.054673  108530 shared_informer.go:227] caches populated
I0827 13:35:34.154820  108530 shared_informer.go:227] caches populated
I0827 13:35:34.255103  108530 shared_informer.go:227] caches populated
I0827 13:35:34.355327  108530 shared_informer.go:227] caches populated
I0827 13:35:34.455554  108530 shared_informer.go:227] caches populated
I0827 13:35:34.555782  108530 shared_informer.go:227] caches populated
I0827 13:35:34.555916  108530 reflector.go:241] k8s.io/client-go/informers/factory.go:133: forcing resync
I0827 13:35:34.555928  108530 reflector.go:241] k8s.io/client-go/informers/factory.go:133: forcing resync
I0827 13:35:34.555969  108530 reflector.go:241] k8s.io/client-go/informers/factory.go:133: forcing resync
I0827 13:35:34.556210  108530 reflector.go:241] k8s.io/client-go/informers/factory.go:133: forcing resync
I0827 13:35:34.556275  108530 reflector.go:241] k8s.io/client-go/informers/factory.go:133: forcing resync
I0827 13:35:34.557622  108530 reflector.go:241] k8s.io/client-go/informers/factory.go:133: forcing resync
I0827 13:35:34.655926  108530 shared_informer.go:227] caches populated
I0827 13:35:34.659019  108530 httplog.go:90] POST /api/v1/nodes: (2.487788ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38774]
I0827 13:35:34.659365  108530 node_tree.go:93] Added node "node1" in group "" to NodeTree
I0827 13:35:34.661513  108530 httplog.go:90] POST /api/v1/namespaces/kube-system/pods: (1.94526ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38774]
I0827 13:35:34.661972  108530 scheduling_queue.go:830] About to try and schedule pod kube-system/pod1-system-node-critical
I0827 13:35:34.661993  108530 scheduler.go:530] Attempting to schedule pod: kube-system/pod1-system-node-critical
I0827 13:35:34.662107  108530 scheduler_binder.go:256] AssumePodVolumes for pod "kube-system/pod1-system-node-critical", node "node1"
I0827 13:35:34.662126  108530 scheduler_binder.go:266] AssumePodVolumes for pod "kube-system/pod1-system-node-critical", node "node1": all PVCs bound and nothing to do
I0827 13:35:34.662170  108530 factory.go:610] Attempting to bind pod1-system-node-critical to node1
I0827 13:35:34.663920  108530 httplog.go:90] POST /api/v1/namespaces/kube-system/pods/pod1-system-node-critical/binding: (1.530692ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38774]
I0827 13:35:34.664328  108530 scheduler.go:667] pod kube-system/pod1-system-node-critical is bound successfully on node "node1", 1 nodes evaluated, 1 nodes were found feasible. Bound node resource: "Capacity: CPU<500m>|Memory<500>|Pods<32>|StorageEphemeral<0>; Allocatable: CPU<500m>|Memory<500>|Pods<32>|StorageEphemeral<0>.".
I0827 13:35:34.666147  108530 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/kube-system/events: (1.602686ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38774]
I0827 13:35:34.763743  108530 httplog.go:90] GET /api/v1/namespaces/kube-system/pods/pod1-system-node-critical: (1.555377ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38774]
I0827 13:35:34.765427  108530 httplog.go:90] GET /api/v1/namespaces/kube-system/pods/pod1-system-node-critical: (1.294659ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38774]
I0827 13:35:34.767700  108530 scheduling_queue.go:830] About to try and schedule pod kube-system/pod2-system-cluster-critical
I0827 13:35:34.767723  108530 scheduler.go:530] Attempting to schedule pod: kube-system/pod2-system-cluster-critical
I0827 13:35:34.767737  108530 httplog.go:90] POST /api/v1/namespaces/kube-system/pods: (1.912882ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38774]
I0827 13:35:34.767831  108530 scheduler_binder.go:256] AssumePodVolumes for pod "kube-system/pod2-system-cluster-critical", node "node1"
I0827 13:35:34.767860  108530 scheduler_binder.go:266] AssumePodVolumes for pod "kube-system/pod2-system-cluster-critical", node "node1": all PVCs bound and nothing to do
I0827 13:35:34.767909  108530 factory.go:610] Attempting to bind pod2-system-cluster-critical to node1
I0827 13:35:34.769511  108530 httplog.go:90] POST /api/v1/namespaces/kube-system/pods/pod2-system-cluster-critical/binding: (1.385297ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38774]
I0827 13:35:34.769786  108530 scheduler.go:667] pod kube-system/pod2-system-cluster-critical is bound successfully on node "node1", 1 nodes evaluated, 1 nodes were found feasible. Bound node resource: "Capacity: CPU<500m>|Memory<500>|Pods<32>|StorageEphemeral<0>; Allocatable: CPU<500m>|Memory<500>|Pods<32>|StorageEphemeral<0>.".
I0827 13:35:34.771354  108530 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/kube-system/events: (1.292248ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38774]
I0827 13:35:34.870007  108530 httplog.go:90] GET /api/v1/namespaces/kube-system/pods/pod2-system-cluster-critical: (1.609007ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38774]
I0827 13:35:34.872134  108530 httplog.go:90] GET /api/v1/namespaces/kube-system/pods/pod2-system-cluster-critical: (1.412869ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38774]
I0827 13:35:34.876773  108530 httplog.go:90] DELETE /api/v1/namespaces/kube-system/pods/pod1-system-node-critical: (4.192612ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38774]
I0827 13:35:34.881339  108530 httplog.go:90] DELETE /api/v1/namespaces/kube-system/pods/pod2-system-cluster-critical: (3.984827ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38774]
I0827 13:35:34.884027  108530 httplog.go:90] GET /api/v1/namespaces/kube-system/pods/pod1-system-node-critical: (1.130872ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38774]
I0827 13:35:34.886435  108530 httplog.go:90] GET /api/v1/namespaces/kube-system/pods/pod2-system-cluster-critical: (927.392µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38774]
E0827 13:35:34.887229  108530 scheduling_queue.go:833] Error while retrieving next pod from scheduling queue: scheduling queue is closed
I0827 13:35:34.887506  108530 httplog.go:90] GET /apis/apps/v1/replicasets?allowWatchBookmarks=true&resourceVersion=33798&timeout=6m34s&timeoutSeconds=394&watch=true: (1.330957955s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38592]
I0827 13:35:34.887519  108530 httplog.go:90] GET /api/v1/replicationcontrollers?allowWatchBookmarks=true&resourceVersion=33792&timeout=6m5s&timeoutSeconds=365&watch=true: (1.330823924s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38578]
I0827 13:35:34.887528  108530 httplog.go:90] GET /apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=33798&timeout=8m6s&timeoutSeconds=486&watch=true: (1.329986516s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38588]
I0827 13:35:34.887636  108530 httplog.go:90] GET /api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=33787&timeout=6m11s&timeoutSeconds=371&watch=true: (1.331624451s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38584]
I0827 13:35:34.887659  108530 httplog.go:90] GET /api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=33787&timeout=6m8s&timeoutSeconds=368&watch=true: (1.331750423s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38580]
I0827 13:35:34.887684  108530 httplog.go:90] GET /apis/policy/v1beta1/poddisruptionbudgets?allowWatchBookmarks=true&resourceVersion=33798&timeout=7m22s&timeoutSeconds=442&watch=true: (1.33086344s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38596]
I0827 13:35:34.887736  108530 httplog.go:90] GET /api/v1/nodes?allowWatchBookmarks=true&resourceVersion=33790&timeout=7m5s&timeoutSeconds=425&watch=true: (1.331681174s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38582]
I0827 13:35:34.887766  108530 httplog.go:90] GET /apis/apps/v1/statefulsets?allowWatchBookmarks=true&resourceVersion=33798&timeout=7m36s&timeoutSeconds=456&watch=true: (1.331897208s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38372]
I0827 13:35:34.887800  108530 httplog.go:90] GET /api/v1/pods?allowWatchBookmarks=true&fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&resourceVersion=33791&timeoutSeconds=341&watch=true: (1.433014283s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38396]
I0827 13:35:34.887831  108530 httplog.go:90] GET /apis/storage.k8s.io/v1beta1/csinodes?allowWatchBookmarks=true&resourceVersion=33799&timeout=6m8s&timeoutSeconds=368&watch=true: (1.331565119s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38594]
I0827 13:35:34.887878  108530 httplog.go:90] GET /api/v1/services?allowWatchBookmarks=true&resourceVersion=34009&timeout=5m50s&timeoutSeconds=350&watch=true: (1.331624697s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38590]
I0827 13:35:34.893005  108530 httplog.go:90] DELETE /api/v1/nodes: (5.082685ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38774]
I0827 13:35:34.893292  108530 controller.go:176] Shutting down kubernetes service endpoint reconciler
I0827 13:35:34.894688  108530 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.212599ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38774]
I0827 13:35:34.896828  108530 httplog.go:90] PUT /api/v1/namespaces/default/endpoints/kubernetes: (1.681855ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38774]
--- FAIL: TestPodPriorityResolution (4.95s)
    preemption_test.go:425: Expected pod pod1-system-node-critical to have priority system-node-critical but was nil
    preemption_test.go:425: Expected pod pod2-system-cluster-critical to have priority system-cluster-critical but was nil

				from junit_eb089aee80105aff5db0557ae4449d31f19359f2_20190827-132740.xml

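For context on the two assertions above: TestPodPriorityResolution creates pods in kube-system that reference the built-in priority classes (system-node-critical, value 2000001000, and system-cluster-critical, value 2000000000) and expects the apiserver's Priority admission plugin to resolve spec.priorityClassName into a numeric spec.priority. "priority ... was nil" means that resolution never happened. Below is a minimal, hypothetical Go sketch of that kind of check, written against the client-go API of this era (pre-context Get signature); the kubeconfig path and the program itself are assumptions for illustration, not the test's actual code.

// Hypothetical sketch: verify that the Priority admission plugin resolved
// spec.priorityClassName into spec.priority for one of the test pods.
package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig location; adjust for your environment.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	pod, err := client.CoreV1().Pods("kube-system").Get("pod1-system-node-critical", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	if pod.Spec.Priority == nil {
		// This is the condition the failing assertions report: the class name
		// was set on the pod, but no numeric priority was filled in by admission.
		fmt.Printf("priority for %s not resolved (priorityClassName=%q)\n",
			pod.Name, pod.Spec.PriorityClassName)
		return
	}
	// For system-node-critical the resolved value should be 2000001000.
	fmt.Printf("%s: priorityClassName=%q priority=%d\n",
		pod.Name, pod.Spec.PriorityClassName, *pod.Spec.Priority)
}

In the integration test itself the equivalent check lives in preemption_test.go (line 425 in this run); the sketch only illustrates the API fields involved, not how the test wires up its apiserver.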


k8s.io/kubernetes/test/integration/scheduler TestPreemption 4.86s

go test -v k8s.io/kubernetes/test/integration/scheduler -run TestPreemption$
=== RUN   TestPreemption
I0827 13:35:15.101953  108530 services.go:33] Network range for service cluster IPs is unspecified. Defaulting to {10.0.0.0 ffffff00}.
I0827 13:35:15.102131  108530 services.go:45] Setting service IP to "10.0.0.1" (read-write).
I0827 13:35:15.102270  108530 master.go:278] Node port range unspecified. Defaulting to 30000-32767.
I0827 13:35:15.102332  108530 master.go:234] Using reconciler: 
I0827 13:35:15.104581  108530 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.104942  108530 client.go:361] parsed scheme: "endpoint"
I0827 13:35:15.105166  108530 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 13:35:15.105715  108530 store.go:1342] Monitoring podtemplates count at <storage-prefix>//podtemplates
I0827 13:35:15.105910  108530 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.105835  108530 reflector.go:158] Listing and watching *core.PodTemplate from storage/cacher.go:/podtemplates
I0827 13:35:15.106415  108530 client.go:361] parsed scheme: "endpoint"
I0827 13:35:15.106532  108530 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 13:35:15.106624  108530 store.go:1342] Monitoring events count at <storage-prefix>//events
I0827 13:35:15.106651  108530 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.106721  108530 reflector.go:158] Listing and watching *core.Event from storage/cacher.go:/events
I0827 13:35:15.107278  108530 watch_cache.go:405] Replace watchCache (rev: 32646) 
I0827 13:35:15.107296  108530 client.go:361] parsed scheme: "endpoint"
I0827 13:35:15.107429  108530 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 13:35:15.107931  108530 store.go:1342] Monitoring limitranges count at <storage-prefix>//limitranges
I0827 13:35:15.108016  108530 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.108057  108530 reflector.go:158] Listing and watching *core.LimitRange from storage/cacher.go:/limitranges
I0827 13:35:15.108139  108530 client.go:361] parsed scheme: "endpoint"
I0827 13:35:15.108252  108530 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 13:35:15.108336  108530 store.go:1342] Monitoring resourcequotas count at <storage-prefix>//resourcequotas
I0827 13:35:15.108399  108530 reflector.go:158] Listing and watching *core.ResourceQuota from storage/cacher.go:/resourcequotas
I0827 13:35:15.108506  108530 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.108630  108530 client.go:361] parsed scheme: "endpoint"
I0827 13:35:15.108650  108530 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 13:35:15.108653  108530 watch_cache.go:405] Replace watchCache (rev: 32646) 
I0827 13:35:15.108741  108530 store.go:1342] Monitoring secrets count at <storage-prefix>//secrets
I0827 13:35:15.108897  108530 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.109012  108530 reflector.go:158] Listing and watching *core.Secret from storage/cacher.go:/secrets
I0827 13:35:15.109034  108530 client.go:361] parsed scheme: "endpoint"
I0827 13:35:15.109048  108530 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 13:35:15.109122  108530 store.go:1342] Monitoring persistentvolumes count at <storage-prefix>//persistentvolumes
I0827 13:35:15.109261  108530 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.109302  108530 reflector.go:158] Listing and watching *core.PersistentVolume from storage/cacher.go:/persistentvolumes
I0827 13:35:15.109301  108530 watch_cache.go:405] Replace watchCache (rev: 32646) 
I0827 13:35:15.109359  108530 client.go:361] parsed scheme: "endpoint"
I0827 13:35:15.109371  108530 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 13:35:15.109456  108530 store.go:1342] Monitoring persistentvolumeclaims count at <storage-prefix>//persistentvolumeclaims
I0827 13:35:15.109569  108530 reflector.go:158] Listing and watching *core.PersistentVolumeClaim from storage/cacher.go:/persistentvolumeclaims
I0827 13:35:15.109595  108530 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.109680  108530 client.go:361] parsed scheme: "endpoint"
I0827 13:35:15.109693  108530 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 13:35:15.109697  108530 watch_cache.go:405] Replace watchCache (rev: 32646) 
I0827 13:35:15.109768  108530 store.go:1342] Monitoring configmaps count at <storage-prefix>//configmaps
I0827 13:35:15.109822  108530 reflector.go:158] Listing and watching *core.ConfigMap from storage/cacher.go:/configmaps
I0827 13:35:15.110283  108530 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.110390  108530 client.go:361] parsed scheme: "endpoint"
I0827 13:35:15.110417  108530 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 13:35:15.110465  108530 watch_cache.go:405] Replace watchCache (rev: 32646) 
I0827 13:35:15.110482  108530 store.go:1342] Monitoring namespaces count at <storage-prefix>//namespaces
I0827 13:35:15.110690  108530 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.110723  108530 reflector.go:158] Listing and watching *core.Namespace from storage/cacher.go:/namespaces
I0827 13:35:15.110803  108530 client.go:361] parsed scheme: "endpoint"
I0827 13:35:15.110824  108530 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 13:35:15.111010  108530 store.go:1342] Monitoring endpoints count at <storage-prefix>//services/endpoints
I0827 13:35:15.111117  108530 reflector.go:158] Listing and watching *core.Endpoints from storage/cacher.go:/services/endpoints
I0827 13:35:15.111165  108530 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.111304  108530 client.go:361] parsed scheme: "endpoint"
I0827 13:35:15.111334  108530 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 13:35:15.111443  108530 store.go:1342] Monitoring nodes count at <storage-prefix>//minions
I0827 13:35:15.111543  108530 reflector.go:158] Listing and watching *core.Node from storage/cacher.go:/minions
I0827 13:35:15.111679  108530 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.111767  108530 client.go:361] parsed scheme: "endpoint"
I0827 13:35:15.111783  108530 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 13:35:15.112222  108530 store.go:1342] Monitoring pods count at <storage-prefix>//pods
I0827 13:35:15.112347  108530 reflector.go:158] Listing and watching *core.Pod from storage/cacher.go:/pods
I0827 13:35:15.112369  108530 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.112472  108530 client.go:361] parsed scheme: "endpoint"
I0827 13:35:15.112497  108530 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 13:35:15.112611  108530 store.go:1342] Monitoring serviceaccounts count at <storage-prefix>//serviceaccounts
I0827 13:35:15.112691  108530 reflector.go:158] Listing and watching *core.ServiceAccount from storage/cacher.go:/serviceaccounts
I0827 13:35:15.112786  108530 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.112872  108530 watch_cache.go:405] Replace watchCache (rev: 32646) 
I0827 13:35:15.112909  108530 client.go:361] parsed scheme: "endpoint"
I0827 13:35:15.112924  108530 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 13:35:15.113054  108530 store.go:1342] Monitoring services count at <storage-prefix>//services/specs
I0827 13:35:15.113089  108530 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.113154  108530 watch_cache.go:405] Replace watchCache (rev: 32646) 
I0827 13:35:15.113232  108530 client.go:361] parsed scheme: "endpoint"
I0827 13:35:15.113275  108530 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 13:35:15.113372  108530 watch_cache.go:405] Replace watchCache (rev: 32646) 
I0827 13:35:15.113408  108530 client.go:361] parsed scheme: "endpoint"
I0827 13:35:15.113517  108530 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 13:35:15.113605  108530 reflector.go:158] Listing and watching *core.Service from storage/cacher.go:/services/specs
I0827 13:35:15.113801  108530 watch_cache.go:405] Replace watchCache (rev: 32646) 
I0827 13:35:15.113899  108530 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.113559  108530 watch_cache.go:405] Replace watchCache (rev: 32646) 
I0827 13:35:15.113461  108530 watch_cache.go:405] Replace watchCache (rev: 32646) 
I0827 13:35:15.114082  108530 client.go:361] parsed scheme: "endpoint"
I0827 13:35:15.114239  108530 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 13:35:15.114379  108530 store.go:1342] Monitoring replicationcontrollers count at <storage-prefix>//controllers
I0827 13:35:15.114412  108530 reflector.go:158] Listing and watching *core.ReplicationController from storage/cacher.go:/controllers
I0827 13:35:15.114914  108530 watch_cache.go:405] Replace watchCache (rev: 32646) 
I0827 13:35:15.115082  108530 watch_cache.go:405] Replace watchCache (rev: 32646) 
I0827 13:35:15.115084  108530 storage_factory.go:285] storing bindings in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.115335  108530 storage_factory.go:285] storing componentstatuses in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.116103  108530 watch_cache.go:405] Replace watchCache (rev: 32646) 
I0827 13:35:15.116488  108530 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.116790  108530 watch_cache.go:405] Replace watchCache (rev: 32646) 
I0827 13:35:15.117248  108530 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.118184  108530 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.118908  108530 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.119390  108530 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.119493  108530 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.119945  108530 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.120872  108530 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.121573  108530 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.121830  108530 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.122811  108530 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.123568  108530 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.124249  108530 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.124675  108530 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.125432  108530 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.125820  108530 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.126374  108530 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.126582  108530 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.126772  108530 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.126928  108530 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.127194  108530 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.128957  108530 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.129442  108530 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.130616  108530 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.132188  108530 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.132669  108530 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.132980  108530 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.133590  108530 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.133915  108530 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.136297  108530 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.137301  108530 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.137985  108530 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.138694  108530 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.139262  108530 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.139383  108530 master.go:423] Skipping disabled API group "auditregistration.k8s.io".
I0827 13:35:15.139413  108530 master.go:434] Enabling API group "authentication.k8s.io".
I0827 13:35:15.139436  108530 master.go:434] Enabling API group "authorization.k8s.io".
I0827 13:35:15.139624  108530 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.139759  108530 client.go:361] parsed scheme: "endpoint"
I0827 13:35:15.139782  108530 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 13:35:15.140323  108530 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0827 13:35:15.140440  108530 reflector.go:158] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0827 13:35:15.140490  108530 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.140763  108530 client.go:361] parsed scheme: "endpoint"
I0827 13:35:15.140789  108530 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 13:35:15.140982  108530 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0827 13:35:15.141033  108530 reflector.go:158] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0827 13:35:15.141119  108530 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.141198  108530 client.go:361] parsed scheme: "endpoint"
I0827 13:35:15.141210  108530 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 13:35:15.141296  108530 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0827 13:35:15.141312  108530 master.go:434] Enabling API group "autoscaling".
I0827 13:35:15.141438  108530 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.141531  108530 reflector.go:158] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0827 13:35:15.141617  108530 client.go:361] parsed scheme: "endpoint"
I0827 13:35:15.141637  108530 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 13:35:15.141715  108530 watch_cache.go:405] Replace watchCache (rev: 32647) 
I0827 13:35:15.141763  108530 store.go:1342] Monitoring jobs.batch count at <storage-prefix>//jobs
I0827 13:35:15.141827  108530 reflector.go:158] Listing and watching *batch.Job from storage/cacher.go:/jobs
I0827 13:35:15.141937  108530 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.142037  108530 client.go:361] parsed scheme: "endpoint"
I0827 13:35:15.142055  108530 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 13:35:15.142405  108530 store.go:1342] Monitoring cronjobs.batch count at <storage-prefix>//cronjobs
I0827 13:35:15.142551  108530 master.go:434] Enabling API group "batch".
I0827 13:35:15.142616  108530 reflector.go:158] Listing and watching *batch.CronJob from storage/cacher.go:/cronjobs
I0827 13:35:15.143033  108530 watch_cache.go:405] Replace watchCache (rev: 32647) 
I0827 13:35:15.143033  108530 watch_cache.go:405] Replace watchCache (rev: 32647) 
I0827 13:35:15.143039  108530 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.143261  108530 client.go:361] parsed scheme: "endpoint"
I0827 13:35:15.143320  108530 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 13:35:15.143365  108530 watch_cache.go:405] Replace watchCache (rev: 32647) 
I0827 13:35:15.143469  108530 store.go:1342] Monitoring certificatesigningrequests.certificates.k8s.io count at <storage-prefix>//certificatesigningrequests
I0827 13:35:15.143502  108530 master.go:434] Enabling API group "certificates.k8s.io".
I0827 13:35:15.143638  108530 reflector.go:158] Listing and watching *certificates.CertificateSigningRequest from storage/cacher.go:/certificatesigningrequests
I0827 13:35:15.143641  108530 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.143721  108530 client.go:361] parsed scheme: "endpoint"
I0827 13:35:15.143734  108530 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 13:35:15.143835  108530 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0827 13:35:15.144033  108530 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.144180  108530 client.go:361] parsed scheme: "endpoint"
I0827 13:35:15.144233  108530 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 13:35:15.144255  108530 reflector.go:158] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0827 13:35:15.144189  108530 watch_cache.go:405] Replace watchCache (rev: 32647) 
I0827 13:35:15.144424  108530 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0827 13:35:15.144462  108530 master.go:434] Enabling API group "coordination.k8s.io".
I0827 13:35:15.144533  108530 reflector.go:158] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0827 13:35:15.144722  108530 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.144953  108530 client.go:361] parsed scheme: "endpoint"
I0827 13:35:15.144988  108530 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 13:35:15.145326  108530 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0827 13:35:15.145415  108530 master.go:434] Enabling API group "extensions".
I0827 13:35:15.145579  108530 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.145727  108530 client.go:361] parsed scheme: "endpoint"
I0827 13:35:15.145778  108530 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 13:35:15.145899  108530 store.go:1342] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I0827 13:35:15.146084  108530 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.146210  108530 client.go:361] parsed scheme: "endpoint"
I0827 13:35:15.146250  108530 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 13:35:15.146439  108530 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0827 13:35:15.146509  108530 master.go:434] Enabling API group "networking.k8s.io".
I0827 13:35:15.146580  108530 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.146687  108530 client.go:361] parsed scheme: "endpoint"
I0827 13:35:15.146728  108530 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 13:35:15.146838  108530 store.go:1342] Monitoring runtimeclasses.node.k8s.io count at <storage-prefix>//runtimeclasses
I0827 13:35:15.146902  108530 master.go:434] Enabling API group "node.k8s.io".
I0827 13:35:15.147464  108530 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.147651  108530 client.go:361] parsed scheme: "endpoint"
I0827 13:35:15.147723  108530 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 13:35:15.147886  108530 store.go:1342] Monitoring poddisruptionbudgets.policy count at <storage-prefix>//poddisruptionbudgets
I0827 13:35:15.148040  108530 reflector.go:158] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0827 13:35:15.148137  108530 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.148292  108530 client.go:361] parsed scheme: "endpoint"
I0827 13:35:15.148353  108530 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 13:35:15.148517  108530 store.go:1342] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicy
I0827 13:35:15.148586  108530 master.go:434] Enabling API group "policy".
I0827 13:35:15.148615  108530 reflector.go:158] Listing and watching *policy.PodDisruptionBudget from storage/cacher.go:/poddisruptionbudgets
I0827 13:35:15.148683  108530 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.148800  108530 client.go:361] parsed scheme: "endpoint"
I0827 13:35:15.148859  108530 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 13:35:15.149002  108530 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0827 13:35:15.148040  108530 reflector.go:158] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0827 13:35:15.149220  108530 reflector.go:158] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicy
I0827 13:35:15.149280  108530 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.149373  108530 reflector.go:158] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0827 13:35:15.148322  108530 reflector.go:158] Listing and watching *node.RuntimeClass from storage/cacher.go:/runtimeclasses
I0827 13:35:15.148357  108530 reflector.go:158] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I0827 13:35:15.149451  108530 client.go:361] parsed scheme: "endpoint"
I0827 13:35:15.150040  108530 watch_cache.go:405] Replace watchCache (rev: 32647) 
I0827 13:35:15.150095  108530 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 13:35:15.150017  108530 watch_cache.go:405] Replace watchCache (rev: 32647) 
I0827 13:35:15.150371  108530 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0827 13:35:15.150397  108530 reflector.go:158] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0827 13:35:15.150502  108530 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.150766  108530 client.go:361] parsed scheme: "endpoint"
I0827 13:35:15.150801  108530 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 13:35:15.151146  108530 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0827 13:35:15.151189  108530 reflector.go:158] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0827 13:35:15.151712  108530 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.151872  108530 client.go:361] parsed scheme: "endpoint"
I0827 13:35:15.151975  108530 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 13:35:15.152913  108530 watch_cache.go:405] Replace watchCache (rev: 32647) 
I0827 13:35:15.153937  108530 watch_cache.go:405] Replace watchCache (rev: 32647) 
I0827 13:35:15.154208  108530 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0827 13:35:15.154298  108530 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.154428  108530 client.go:361] parsed scheme: "endpoint"
I0827 13:35:15.154479  108530 watch_cache.go:405] Replace watchCache (rev: 32647) 
I0827 13:35:15.154601  108530 reflector.go:158] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0827 13:35:15.154729  108530 watch_cache.go:405] Replace watchCache (rev: 32647) 
I0827 13:35:15.154738  108530 watch_cache.go:405] Replace watchCache (rev: 32647) 
I0827 13:35:15.154755  108530 watch_cache.go:405] Replace watchCache (rev: 32647) 
I0827 13:35:15.154962  108530 watch_cache.go:405] Replace watchCache (rev: 32647) 
I0827 13:35:15.155058  108530 watch_cache.go:405] Replace watchCache (rev: 32647) 
I0827 13:35:15.155147  108530 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 13:35:15.156910  108530 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0827 13:35:15.157116  108530 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.157192  108530 reflector.go:158] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0827 13:35:15.157239  108530 client.go:361] parsed scheme: "endpoint"
I0827 13:35:15.157256  108530 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 13:35:15.157690  108530 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0827 13:35:15.157723  108530 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.157784  108530 reflector.go:158] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0827 13:35:15.157861  108530 client.go:361] parsed scheme: "endpoint"
I0827 13:35:15.157882  108530 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 13:35:15.158219  108530 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0827 13:35:15.158403  108530 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.158680  108530 client.go:361] parsed scheme: "endpoint"
I0827 13:35:15.158706  108530 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 13:35:15.158889  108530 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0827 13:35:15.158920  108530 master.go:434] Enabling API group "rbac.authorization.k8s.io".
I0827 13:35:15.161277  108530 reflector.go:158] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0827 13:35:15.161758  108530 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.161924  108530 client.go:361] parsed scheme: "endpoint"
I0827 13:35:15.161976  108530 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 13:35:15.162175  108530 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0827 13:35:15.162366  108530 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.162613  108530 client.go:361] parsed scheme: "endpoint"
I0827 13:35:15.162701  108530 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 13:35:15.162794  108530 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0827 13:35:15.162837  108530 master.go:434] Enabling API group "scheduling.k8s.io".
I0827 13:35:15.163215  108530 master.go:423] Skipping disabled API group "settings.k8s.io".
I0827 13:35:15.163415  108530 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.163556  108530 client.go:361] parsed scheme: "endpoint"
I0827 13:35:15.163614  108530 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 13:35:15.163723  108530 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0827 13:35:15.163912  108530 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.164066  108530 client.go:361] parsed scheme: "endpoint"
I0827 13:35:15.164132  108530 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 13:35:15.164273  108530 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0827 13:35:15.164375  108530 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.164569  108530 client.go:361] parsed scheme: "endpoint"
I0827 13:35:15.164663  108530 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 13:35:15.164862  108530 store.go:1342] Monitoring csinodes.storage.k8s.io count at <storage-prefix>//csinodes
I0827 13:35:15.164941  108530 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.165340  108530 client.go:361] parsed scheme: "endpoint"
I0827 13:35:15.165410  108530 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 13:35:15.165512  108530 store.go:1342] Monitoring csidrivers.storage.k8s.io count at <storage-prefix>//csidrivers
I0827 13:35:15.165698  108530 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.165867  108530 client.go:361] parsed scheme: "endpoint"
I0827 13:35:15.165928  108530 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 13:35:15.166020  108530 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0827 13:35:15.166938  108530 reflector.go:158] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0827 13:35:15.167081  108530 reflector.go:158] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0827 13:35:15.167308  108530 reflector.go:158] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0827 13:35:15.167314  108530 reflector.go:158] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0827 13:35:15.167591  108530 reflector.go:158] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0827 13:35:15.167601  108530 reflector.go:158] Listing and watching *storage.CSINode from storage/cacher.go:/csinodes
I0827 13:35:15.167622  108530 reflector.go:158] Listing and watching *storage.CSIDriver from storage/cacher.go:/csidrivers
I0827 13:35:15.168485  108530 reflector.go:158] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0827 13:35:15.168438  108530 watch_cache.go:405] Replace watchCache (rev: 32648) 
I0827 13:35:15.168746  108530 watch_cache.go:405] Replace watchCache (rev: 32648) 
I0827 13:35:15.168752  108530 watch_cache.go:405] Replace watchCache (rev: 32648) 
I0827 13:35:15.170249  108530 watch_cache.go:405] Replace watchCache (rev: 32648) 
I0827 13:35:15.170448  108530 watch_cache.go:405] Replace watchCache (rev: 32648) 
I0827 13:35:15.170444  108530 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.170741  108530 client.go:361] parsed scheme: "endpoint"
I0827 13:35:15.171003  108530 watch_cache.go:405] Replace watchCache (rev: 32649) 
I0827 13:35:15.171309  108530 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 13:35:15.171511  108530 watch_cache.go:405] Replace watchCache (rev: 32649) 
I0827 13:35:15.171769  108530 watch_cache.go:405] Replace watchCache (rev: 32649) 
I0827 13:35:15.172239  108530 watch_cache.go:405] Replace watchCache (rev: 32649) 
I0827 13:35:15.172787  108530 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0827 13:35:15.172885  108530 reflector.go:158] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0827 13:35:15.172948  108530 master.go:434] Enabling API group "storage.k8s.io".
I0827 13:35:15.173184  108530 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.173351  108530 client.go:361] parsed scheme: "endpoint"
I0827 13:35:15.173424  108530 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 13:35:15.173637  108530 store.go:1342] Monitoring deployments.apps count at <storage-prefix>//deployments
I0827 13:35:15.173949  108530 reflector.go:158] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0827 13:35:15.173954  108530 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.174086  108530 client.go:361] parsed scheme: "endpoint"
I0827 13:35:15.174105  108530 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 13:35:15.174229  108530 store.go:1342] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0827 13:35:15.174280  108530 reflector.go:158] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0827 13:35:15.174564  108530 watch_cache.go:405] Replace watchCache (rev: 32649) 
I0827 13:35:15.174587  108530 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.174710  108530 client.go:361] parsed scheme: "endpoint"
I0827 13:35:15.174728  108530 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 13:35:15.174872  108530 store.go:1342] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0827 13:35:15.175049  108530 reflector.go:158] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0827 13:35:15.175075  108530 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.175196  108530 client.go:361] parsed scheme: "endpoint"
I0827 13:35:15.175212  108530 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 13:35:15.175342  108530 store.go:1342] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0827 13:35:15.175489  108530 reflector.go:158] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0827 13:35:15.175531  108530 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.175660  108530 client.go:361] parsed scheme: "endpoint"
I0827 13:35:15.175683  108530 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 13:35:15.175769  108530 store.go:1342] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0827 13:35:15.175780  108530 master.go:434] Enabling API group "apps".
I0827 13:35:15.175806  108530 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.175916  108530 client.go:361] parsed scheme: "endpoint"
I0827 13:35:15.175936  108530 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 13:35:15.176013  108530 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0827 13:35:15.176073  108530 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.176128  108530 reflector.go:158] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0827 13:35:15.176419  108530 client.go:361] parsed scheme: "endpoint"
I0827 13:35:15.176448  108530 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 13:35:15.176452  108530 reflector.go:158] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0827 13:35:15.176568  108530 watch_cache.go:405] Replace watchCache (rev: 32649) 
I0827 13:35:15.176803  108530 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0827 13:35:15.176828  108530 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.176928  108530 reflector.go:158] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0827 13:35:15.176950  108530 client.go:361] parsed scheme: "endpoint"
I0827 13:35:15.176965  108530 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 13:35:15.177073  108530 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0827 13:35:15.177104  108530 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.177199  108530 client.go:361] parsed scheme: "endpoint"
I0827 13:35:15.177202  108530 reflector.go:158] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0827 13:35:15.177212  108530 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 13:35:15.178659  108530 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0827 13:35:15.178785  108530 master.go:434] Enabling API group "admissionregistration.k8s.io".
I0827 13:35:15.178865  108530 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.178787  108530 reflector.go:158] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0827 13:35:15.179365  108530 watch_cache.go:405] Replace watchCache (rev: 32649) 
I0827 13:35:15.179711  108530 watch_cache.go:405] Replace watchCache (rev: 32649) 
I0827 13:35:15.181628  108530 client.go:361] parsed scheme: "endpoint"
I0827 13:35:15.181759  108530 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 13:35:15.182166  108530 watch_cache.go:405] Replace watchCache (rev: 32649) 
I0827 13:35:15.182268  108530 watch_cache.go:405] Replace watchCache (rev: 32649) 
I0827 13:35:15.182945  108530 watch_cache.go:405] Replace watchCache (rev: 32649) 
I0827 13:35:15.183172  108530 store.go:1342] Monitoring events count at <storage-prefix>//events
I0827 13:35:15.183211  108530 master.go:434] Enabling API group "events.k8s.io".
I0827 13:35:15.183225  108530 reflector.go:158] Listing and watching *core.Event from storage/cacher.go:/events
I0827 13:35:15.183604  108530 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.183800  108530 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.183930  108530 watch_cache.go:405] Replace watchCache (rev: 32649) 
I0827 13:35:15.184352  108530 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.184505  108530 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.184616  108530 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.184826  108530 watch_cache.go:405] Replace watchCache (rev: 32649) 
I0827 13:35:15.184932  108530 watch_cache.go:405] Replace watchCache (rev: 32649) 
I0827 13:35:15.184932  108530 watch_cache.go:405] Replace watchCache (rev: 32649) 
I0827 13:35:15.185114  108530 watch_cache.go:405] Replace watchCache (rev: 32649) 
I0827 13:35:15.185199  108530 watch_cache.go:405] Replace watchCache (rev: 32649) 
I0827 13:35:15.185335  108530 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.185585  108530 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.185820  108530 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.186057  108530 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.186857  108530 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.187249  108530 watch_cache.go:405] Replace watchCache (rev: 32649) 
I0827 13:35:15.186981  108530 watch_cache.go:405] Replace watchCache (rev: 32649) 
I0827 13:35:15.188488  108530 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.188551  108530 watch_cache.go:405] Replace watchCache (rev: 32649) 
I0827 13:35:15.188835  108530 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.189747  108530 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.190055  108530 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.191261  108530 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.191572  108530 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.192705  108530 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.192967  108530 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.193589  108530 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.193869  108530 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0827 13:35:15.193967  108530 genericapiserver.go:390] Skipping API batch/v2alpha1 because it has no resources.
I0827 13:35:15.194884  108530 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.195140  108530 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.195778  108530 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.196714  108530 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.197941  108530 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.198659  108530 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.198933  108530 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.199947  108530 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.200669  108530 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.200907  108530 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.201779  108530 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0827 13:35:15.201866  108530 genericapiserver.go:390] Skipping API node.k8s.io/v1alpha1 because it has no resources.
I0827 13:35:15.202744  108530 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.203054  108530 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.203739  108530 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.204416  108530 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.204795  108530 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.205959  108530 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.206861  108530 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.209867  108530 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.210418  108530 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.211093  108530 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.212301  108530 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0827 13:35:15.212473  108530 genericapiserver.go:390] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
I0827 13:35:15.213719  108530 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.214449  108530 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0827 13:35:15.214603  108530 genericapiserver.go:390] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
I0827 13:35:15.215672  108530 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.216381  108530 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.216789  108530 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.217542  108530 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.218504  108530 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.219120  108530 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.219714  108530 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0827 13:35:15.219881  108530 genericapiserver.go:390] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
I0827 13:35:15.221075  108530 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.222073  108530 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.222550  108530 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.223770  108530 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.224240  108530 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.224647  108530 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.225683  108530 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.226381  108530 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.226831  108530 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.227653  108530 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.228120  108530 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.228503  108530 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0827 13:35:15.228704  108530 genericapiserver.go:390] Skipping API apps/v1beta2 because it has no resources.
W0827 13:35:15.228812  108530 genericapiserver.go:390] Skipping API apps/v1beta1 because it has no resources.
I0827 13:35:15.230186  108530 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.230980  108530 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.232134  108530 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.232870  108530 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0827 13:35:15.233755  108530 storage_factory.go:285] storing events.events.k8s.io in events.k8s.io/v1beta1, reading as events.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4e6b7525-54d2-476a-b498-adeb8ae1e7b9", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
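The storagebackend.Config dump repeated in the lines above corresponds to the struct in k8s.io/apiserver/pkg/storage/storagebackend. A minimal sketch of building an equivalent config against the test etcd (field names are taken from the log output; the prefix value is the per-test UUID shown there and is illustrative):

package example

import (
	"time"

	"k8s.io/apiserver/pkg/storage/storagebackend"
)

// newTestStorageConfig mirrors the Config values printed by storage_factory.go
// above: a single local etcd endpoint, paging enabled, and the default
// compaction / count-metric intervals (300s and 60s respectively).
func newTestStorageConfig() storagebackend.Config {
	return storagebackend.Config{
		Prefix: "4e6b7525-54d2-476a-b498-adeb8ae1e7b9", // per-test prefix from the log
		Transport: storagebackend.TransportConfig{
			ServerList: []string{"http://127.0.0.1:2379"},
		},
		Paging:                true,
		CompactionInterval:    300 * time.Second,
		CountMetricPollPeriod: 60 * time.Second,
	}
}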
I0827 13:35:15.236561  108530 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0827 13:35:15.236589  108530 healthz.go:169] healthz check poststarthook/bootstrap-controller failed: not finished
I0827 13:35:15.236599  108530 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 13:35:15.236606  108530 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0827 13:35:15.236615  108530 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0827 13:35:15.236624  108530 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[-]poststarthook/bootstrap-controller failed: reason withheld
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0827 13:35:15.236651  108530 httplog.go:90] GET /healthz: (182.07µs) 0 [Go-http-client/1.1 127.0.0.1:51096]
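The per-check [+]/[-] lines above are the body returned by kube-apiserver's /healthz handler while post-start hooks are still reporting "not finished"; once every check passes the body collapses to "ok". A hedged sketch of polling that endpoint from Go during a test (the base URL is a placeholder, since the integration apiserver binds an ephemeral port not shown in this log):

package example

import (
	"fmt"
	"io/ioutil"
	"net/http"
	"time"
)

// waitForHealthz polls GET /healthz on the given base URL until the body is
// "ok" or the attempts run out, mirroring the repeated probes logged above.
func waitForHealthz(baseURL string) error {
	for i := 0; i < 30; i++ {
		resp, err := http.Get(baseURL + "/healthz")
		if err == nil {
			body, _ := ioutil.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil
			}
		}
		time.Sleep(100 * time.Millisecond)
	}
	return fmt.Errorf("healthz did not become ready")
}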
I0827 13:35:15.237696  108530 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.045514ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51098]
I0827 13:35:15.239963  108530 httplog.go:90] GET /api/v1/services: (1.158844ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51098]
I0827 13:35:15.243286  108530 httplog.go:90] GET /api/v1/services: (868.805µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51098]
I0827 13:35:15.245126  108530 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0827 13:35:15.245154  108530 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 13:35:15.245165  108530 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0827 13:35:15.245175  108530 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0827 13:35:15.245182  108530 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0827 13:35:15.245215  108530 httplog.go:90] GET /healthz: (171.979µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51096]
I0827 13:35:15.246475  108530 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.445794ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51098]
I0827 13:35:15.248903  108530 httplog.go:90] GET /api/v1/services: (2.820074ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51096]
I0827 13:35:15.250976  108530 httplog.go:90] GET /api/v1/services: (4.780576ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51100]
I0827 13:35:15.255323  108530 httplog.go:90] POST /api/v1/namespaces: (8.498084ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51098]
I0827 13:35:15.256908  108530 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.102766ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51100]
I0827 13:35:15.258421  108530 httplog.go:90] POST /api/v1/namespaces: (1.236317ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51100]
I0827 13:35:15.260581  108530 httplog.go:90] GET /api/v1/namespaces/kube-node-lease: (1.363761ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51100]
I0827 13:35:15.264385  108530 httplog.go:90] POST /api/v1/namespaces: (1.487539ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51100]
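The 404/201 pairs above are the bootstrap controller ensuring the system namespaces (kube-system, kube-public, kube-node-lease) exist. A comparable get-then-create with client-go, shown only as a sketch (clientset construction omitted; the context-free Get/Create signatures are the pre-1.18 form matching this vintage of client-go):

package example

import (
	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// ensureNamespace issues the same GET-then-POST pattern seen in the httplog
// lines: look the namespace up, and create it only on a NotFound error.
func ensureNamespace(client kubernetes.Interface, name string) error {
	_, err := client.CoreV1().Namespaces().Get(name, metav1.GetOptions{})
	if err == nil {
		return nil
	}
	if !apierrors.IsNotFound(err) {
		return err
	}
	_, err = client.CoreV1().Namespaces().Create(&corev1.Namespace{
		ObjectMeta: metav1.ObjectMeta{Name: name},
	})
	return err
}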
I0827 13:35:15.337746  108530 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0827 13:35:15.337788  108530 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 13:35:15.337797  108530 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0827 13:35:15.337804  108530 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0827 13:35:15.337809  108530 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0827 13:35:15.337878  108530 httplog.go:90] GET /healthz: (660.627µs) 0 [Go-http-client/1.1 127.0.0.1:51100]
I0827 13:35:15.346287  108530 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0827 13:35:15.346323  108530 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 13:35:15.346336  108530 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0827 13:35:15.346347  108530 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0827 13:35:15.346355  108530 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0827 13:35:15.346383  108530 httplog.go:90] GET /healthz: (240.045µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51100]
I0827 13:35:15.437249  108530 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0827 13:35:15.437281  108530 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 13:35:15.437294  108530 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0827 13:35:15.437302  108530 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0827 13:35:15.437308  108530 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0827 13:35:15.437339  108530 httplog.go:90] GET /healthz: (215.387µs) 0 [Go-http-client/1.1 127.0.0.1:51100]
I0827 13:35:15.446214  108530 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0827 13:35:15.446247  108530 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 13:35:15.446259  108530 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0827 13:35:15.446266  108530 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0827 13:35:15.446276  108530 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0827 13:35:15.446308  108530 httplog.go:90] GET /healthz: (241.554µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51100]
I0827 13:35:15.537477  108530 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0827 13:35:15.537515  108530 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 13:35:15.537535  108530 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0827 13:35:15.537546  108530 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0827 13:35:15.537567  108530 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0827 13:35:15.537617  108530 httplog.go:90] GET /healthz: (311.19µs) 0 [Go-http-client/1.1 127.0.0.1:51100]
I0827 13:35:15.546146  108530 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0827 13:35:15.546178  108530 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 13:35:15.546191  108530 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0827 13:35:15.546202  108530 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0827 13:35:15.546209  108530 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0827 13:35:15.546236  108530 httplog.go:90] GET /healthz: (225.548µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51100]
I0827 13:35:15.637390  108530 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0827 13:35:15.637424  108530 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 13:35:15.637443  108530 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0827 13:35:15.637453  108530 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0827 13:35:15.637461  108530 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0827 13:35:15.637494  108530 httplog.go:90] GET /healthz: (262.911µs) 0 [Go-http-client/1.1 127.0.0.1:51100]
I0827 13:35:15.646174  108530 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0827 13:35:15.646217  108530 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 13:35:15.646230  108530 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0827 13:35:15.646254  108530 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0827 13:35:15.646263  108530 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0827 13:35:15.646288  108530 httplog.go:90] GET /healthz: (250.515µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51100]
I0827 13:35:15.737360  108530 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0827 13:35:15.737396  108530 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 13:35:15.737410  108530 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0827 13:35:15.737420  108530 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0827 13:35:15.737429  108530 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0827 13:35:15.737457  108530 httplog.go:90] GET /healthz: (286.71µs) 0 [Go-http-client/1.1 127.0.0.1:51100]
I0827 13:35:15.746314  108530 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0827 13:35:15.746348  108530 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 13:35:15.746369  108530 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0827 13:35:15.746379  108530 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0827 13:35:15.746388  108530 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0827 13:35:15.746443  108530 httplog.go:90] GET /healthz: (271.82µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51100]
I0827 13:35:15.837391  108530 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0827 13:35:15.837426  108530 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 13:35:15.837439  108530 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0827 13:35:15.837450  108530 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0827 13:35:15.837458  108530 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0827 13:35:15.837496  108530 httplog.go:90] GET /healthz: (240.007µs) 0 [Go-http-client/1.1 127.0.0.1:51100]
I0827 13:35:15.846208  108530 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0827 13:35:15.846240  108530 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 13:35:15.846253  108530 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0827 13:35:15.846263  108530 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0827 13:35:15.846271  108530 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0827 13:35:15.846318  108530 httplog.go:90] GET /healthz: (252.296µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51100]
I0827 13:35:15.937281  108530 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0827 13:35:15.937315  108530 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 13:35:15.937325  108530 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0827 13:35:15.937335  108530 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0827 13:35:15.937343  108530 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0827 13:35:15.937371  108530 httplog.go:90] GET /healthz: (211.164µs) 0 [Go-http-client/1.1 127.0.0.1:51100]
I0827 13:35:15.946166  108530 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0827 13:35:15.946192  108530 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 13:35:15.946201  108530 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0827 13:35:15.946208  108530 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0827 13:35:15.946213  108530 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0827 13:35:15.946232  108530 httplog.go:90] GET /healthz: (188.92µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51100]
I0827 13:35:16.037357  108530 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0827 13:35:16.037398  108530 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 13:35:16.037411  108530 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0827 13:35:16.037421  108530 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0827 13:35:16.037429  108530 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0827 13:35:16.037464  108530 httplog.go:90] GET /healthz: (270.838µs) 0 [Go-http-client/1.1 127.0.0.1:51100]
I0827 13:35:16.046221  108530 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0827 13:35:16.046251  108530 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 13:35:16.046259  108530 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0827 13:35:16.046266  108530 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0827 13:35:16.046271  108530 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0827 13:35:16.046291  108530 httplog.go:90] GET /healthz: (178.849µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51100]
I0827 13:35:16.102405  108530 client.go:361] parsed scheme: "endpoint"
I0827 13:35:16.102477  108530 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0827 13:35:16.138246  108530 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 13:35:16.138277  108530 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0827 13:35:16.138288  108530 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0827 13:35:16.138297  108530 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0827 13:35:16.138346  108530 httplog.go:90] GET /healthz: (1.101326ms) 0 [Go-http-client/1.1 127.0.0.1:51100]
I0827 13:35:16.147095  108530 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 13:35:16.147129  108530 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0827 13:35:16.147139  108530 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0827 13:35:16.147148  108530 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0827 13:35:16.147183  108530 httplog.go:90] GET /healthz: (1.097387ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51100]
I0827 13:35:16.238056  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.127076ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51286]
I0827 13:35:16.238465  108530 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.871373ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51100]
I0827 13:35:16.238515  108530 httplog.go:90] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-node-critical: (1.935819ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51096]
I0827 13:35:16.238597  108530 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 13:35:16.238612  108530 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0827 13:35:16.238623  108530 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0827 13:35:16.238631  108530 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0827 13:35:16.238691  108530 httplog.go:90] GET /healthz: (1.288151ms) 0 [Go-http-client/1.1 127.0.0.1:51288]
I0827 13:35:16.239567  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (886.159µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51286]
I0827 13:35:16.240393  108530 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: (1.479773ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51096]
I0827 13:35:16.241770  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (1.302192ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51286]
I0827 13:35:16.242043  108530 httplog.go:90] POST /api/v1/namespaces/kube-system/configmaps: (1.262328ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51096]
I0827 13:35:16.242812  108530 httplog.go:90] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (3.865151ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51100]
I0827 13:35:16.243217  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (1.131046ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51286]
I0827 13:35:16.243222  108530 storage_scheduling.go:139] created PriorityClass system-node-critical with value 2000001000
I0827 13:35:16.244310  108530 httplog.go:90] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-cluster-critical: (916.68µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51096]
I0827 13:35:16.244362  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (762.903µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51286]
I0827 13:35:16.245429  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (734.619µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51096]
I0827 13:35:16.245686  108530 httplog.go:90] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (1.002276ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:16.245857  108530 storage_scheduling.go:139] created PriorityClass system-cluster-critical with value 2000000000
I0827 13:35:16.245874  108530 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
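The two PriorityClass objects created above (system-node-critical at 2000001000 and system-cluster-critical at 2000000000) are what TestPodPriorityResolution later resolves pod priorities against. Expressed as scheduling.k8s.io/v1beta1 objects in Go, with the integer values copied from the log and the Description strings purely illustrative:

package example

import (
	schedulingv1beta1 "k8s.io/api/scheduling/v1beta1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// systemPriorityClasses mirrors the two built-in classes logged by
// storage_scheduling.go above; their Value fields are what the scheduler
// resolves into pod.Spec.Priority for pods referencing them by name.
var systemPriorityClasses = []schedulingv1beta1.PriorityClass{
	{
		ObjectMeta:  metav1.ObjectMeta{Name: "system-node-critical"},
		Value:       2000001000,
		Description: "system critical pods that must stay on their node", // illustrative text
	},
	{
		ObjectMeta:  metav1.ObjectMeta{Name: "system-cluster-critical"},
		Value:       2000000000,
		Description: "system critical pods that must run somewhere in the cluster", // illustrative text
	},
}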
I0827 13:35:16.246828  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (1.045313ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51096]
I0827 13:35:16.246865  108530 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 13:35:16.246885  108530 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0827 13:35:16.246910  108530 httplog.go:90] GET /healthz: (837.301µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:16.247967  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (796.375µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:16.249004  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (694.741µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:16.250198  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-admin: (810.914µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:16.252268  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.640677ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:16.252463  108530 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/cluster-admin
I0827 13:35:16.253562  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:discovery: (920.537µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:16.256539  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.402217ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:16.256730  108530 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:discovery
I0827 13:35:16.257799  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:basic-user: (854.773µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:16.259455  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.207246ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:16.260068  108530 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:basic-user
I0827 13:35:16.261354  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:public-info-viewer: (883.744µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:16.263452  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.768318ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:16.263608  108530 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:public-info-viewer
I0827 13:35:16.266206  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (2.446974ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:16.268091  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.479611ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:16.268292  108530 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/admin
I0827 13:35:16.269288  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (848.126µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:16.272580  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.947807ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:16.272779  108530 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/edit
I0827 13:35:16.273949  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (984.338µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:16.276259  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.965506ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:16.276427  108530 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/view
I0827 13:35:16.279501  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (2.893237ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:16.281312  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.234528ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:16.281440  108530 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin
I0827 13:35:16.282300  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (696.396µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:16.284824  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.038567ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:16.285214  108530 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit
I0827 13:35:16.286503  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (1.021737ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:16.288677  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.899143ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:16.289042  108530 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view
I0827 13:35:16.290018  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:heapster: (728µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:16.292375  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.629273ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:16.292528  108530 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:heapster
I0827 13:35:16.293372  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node: (731.928µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:16.295618  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.474841ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:16.296033  108530 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node
I0827 13:35:16.297272  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-problem-detector: (1.060118ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:16.299663  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.842783ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:16.299855  108530 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector
I0827 13:35:16.301322  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-proxier: (1.325861ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:16.303196  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.539568ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:16.303491  108530 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-proxier
I0827 13:35:16.304532  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kubelet-api-admin: (836.498µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:16.306694  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.567952ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:16.307026  108530 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kubelet-api-admin
I0827 13:35:16.308119  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-bootstrapper: (902.869µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:16.310054  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.444193ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:16.310340  108530 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-bootstrapper
I0827 13:35:16.311642  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:auth-delegator: (1.097754ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:16.313386  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.346526ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:16.313505  108530 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:auth-delegator
I0827 13:35:16.314410  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-aggregator: (782.873µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:16.316259  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.613356ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:16.316489  108530 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator
I0827 13:35:16.317513  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-controller-manager: (760.653µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:16.319051  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.179846ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:16.319306  108530 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager
I0827 13:35:16.320315  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-dns: (704.961µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:16.323398  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.317075ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:16.323603  108530 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-dns
I0827 13:35:16.324572  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:persistent-volume-provisioner: (812.248µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:16.326747  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.43723ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:16.327082  108530 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner
I0827 13:35:16.328353  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-attacher: (1.007709ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:16.331428  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.421244ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:16.332135  108530 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:csi-external-attacher
I0827 13:35:16.333300  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:nodeclient: (960.563µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:16.336447  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.73334ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:16.336752  108530 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:nodeclient
I0827 13:35:16.337921  108530 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 13:35:16.337997  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient: (917.575µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:16.338039  108530 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0827 13:35:16.338559  108530 httplog.go:90] GET /healthz: (1.235602ms) 0 [Go-http-client/1.1 127.0.0.1:51096]
I0827 13:35:16.340349  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.796448ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:16.340652  108530 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
I0827 13:35:16.341691  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:volume-scheduler: (765.027µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:16.343614  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.533279ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:16.343949  108530 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:volume-scheduler
I0827 13:35:16.344933  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-scheduler: (805.324µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:16.346687  108530 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 13:35:16.346806  108530 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0827 13:35:16.346999  108530 httplog.go:90] GET /healthz: (1.091006ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51096]
I0827 13:35:16.347627  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.279214ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:16.347940  108530 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler
I0827 13:35:16.352424  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-provisioner: (4.233431ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:16.354692  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.926927ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:16.354877  108530 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:csi-external-provisioner
I0827 13:35:16.355883  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:attachdetach-controller: (811.399µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:16.357982  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.617367ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:16.358300  108530 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0827 13:35:16.360246  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:clusterrole-aggregation-controller: (1.78089ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:16.362043  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.474768ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:16.362425  108530 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0827 13:35:16.363461  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:cronjob-controller: (825.616µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:16.365404  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.437218ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:16.365670  108530 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0827 13:35:16.366781  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:daemon-set-controller: (879.278µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:16.368668  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.383919ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:16.368913  108530 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0827 13:35:16.370059  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:deployment-controller: (941.803µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:16.371821  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.300305ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:16.372100  108530 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller
I0827 13:35:16.373720  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:disruption-controller: (1.467047ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:16.375818  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.335396ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:16.376084  108530 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller
I0827 13:35:16.376971  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:endpoint-controller: (716.322µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:16.378728  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.334691ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:16.378976  108530 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0827 13:35:16.380017  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:expand-controller: (806.129µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:16.382402  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.988179ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:16.382635  108530 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:expand-controller
I0827 13:35:16.383677  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:generic-garbage-collector: (835.528µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:16.385350  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.217801ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:16.385526  108530 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0827 13:35:16.386498  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:horizontal-pod-autoscaler: (784.805µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:16.389210  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.249011ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:16.389475  108530 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0827 13:35:16.390534  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:job-controller: (805.61µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:16.397040  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (5.851045ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:16.397238  108530 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller
I0827 13:35:16.398330  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:namespace-controller: (895.418µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:16.400275  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.44115ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:16.400485  108530 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:namespace-controller
I0827 13:35:16.401463  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:node-controller: (746.66µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:16.403488  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.562935ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:16.403686  108530 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:node-controller
I0827 13:35:16.404809  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:persistent-volume-binder: (848.835µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:16.408693  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.372418ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:16.409112  108530 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0827 13:35:16.410211  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pod-garbage-collector: (811.137µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:16.412019  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.28008ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:16.412207  108530 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0827 13:35:16.413256  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replicaset-controller: (829.273µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:16.415512  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.710888ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:16.416339  108530 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0827 13:35:16.417250  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replication-controller: (731.135µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:16.422274  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (4.565139ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:16.422477  108530 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller
I0827 13:35:16.423527  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:resourcequota-controller: (834.294µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:16.426191  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.828708ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:16.426439  108530 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0827 13:35:16.428612  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:route-controller: (1.818468ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:16.430708  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.586043ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:16.431000  108530 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:route-controller
I0827 13:35:16.432026  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-account-controller: (784.725µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:16.433903  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.411183ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:16.434133  108530 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-account-controller
I0827 13:35:16.435167  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-controller: (847.747µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:16.437000  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.43805ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:16.437365  108530 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-controller
I0827 13:35:16.437881  108530 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 13:35:16.437904  108530 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0827 13:35:16.437953  108530 httplog.go:90] GET /healthz: (847.78µs) 0 [Go-http-client/1.1 127.0.0.1:51096]
I0827 13:35:16.438516  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:statefulset-controller: (861.49µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:16.440183  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.294713ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:16.440475  108530 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0827 13:35:16.441410  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:ttl-controller: (758.91µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:16.443025  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.205604ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:16.443218  108530 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-controller
I0827 13:35:16.444610  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:certificate-controller: (1.178946ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:16.446828  108530 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 13:35:16.446886  108530 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0827 13:35:16.446912  108530 httplog.go:90] GET /healthz: (932.852µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:16.458585  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.864959ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:16.458794  108530 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller
I0827 13:35:16.477816  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pvc-protection-controller: (1.114545ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:16.498717  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.975189ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:16.499047  108530 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0827 13:35:16.518775  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pv-protection-controller: (1.936708ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:16.538230  108530 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 13:35:16.538263  108530 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0827 13:35:16.538297  108530 httplog.go:90] GET /healthz: (1.120311ms) 0 [Go-http-client/1.1 127.0.0.1:51096]
I0827 13:35:16.538810  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.062727ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:16.539026  108530 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0827 13:35:16.547274  108530 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 13:35:16.547352  108530 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0827 13:35:16.547456  108530 httplog.go:90] GET /healthz: (1.333608ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51096]
I0827 13:35:16.559138  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin: (1.409824ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51096]
I0827 13:35:16.579630  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.830804ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51096]
I0827 13:35:16.580082  108530 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin
I0827 13:35:16.604358  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (7.544429ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51096]
I0827 13:35:16.619482  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.690288ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51096]
I0827 13:35:16.619730  108530 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery
I0827 13:35:16.638113  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:basic-user: (1.360012ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51096]
I0827 13:35:16.639045  108530 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 13:35:16.639068  108530 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0827 13:35:16.639101  108530 httplog.go:90] GET /healthz: (2.059379ms) 0 [Go-http-client/1.1 127.0.0.1:51288]
I0827 13:35:16.649592  108530 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 13:35:16.649620  108530 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0827 13:35:16.649664  108530 httplog.go:90] GET /healthz: (1.570777ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:16.660255  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.963951ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:16.660476  108530 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user
I0827 13:35:16.678100  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:public-info-viewer: (1.32896ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:16.698876  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.044765ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:16.699099  108530 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:public-info-viewer
I0827 13:35:16.719762  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node-proxier: (2.995839ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:16.739224  108530 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 13:35:16.739254  108530 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0827 13:35:16.739305  108530 httplog.go:90] GET /healthz: (1.552805ms) 0 [Go-http-client/1.1 127.0.0.1:51096]
I0827 13:35:16.740976  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (4.180606ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:16.741251  108530 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier
I0827 13:35:16.747268  108530 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 13:35:16.747303  108530 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0827 13:35:16.747338  108530 httplog.go:90] GET /healthz: (1.17815ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:16.759520  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-controller-manager: (2.78337ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:16.781660  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (4.8509ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:16.782127  108530 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager
I0827 13:35:16.798276  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-dns: (1.495643ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:16.819937  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.174462ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:16.820283  108530 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns
I0827 13:35:16.837961  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-scheduler: (1.164816ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:16.838396  108530 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 13:35:16.838433  108530 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0827 13:35:16.838464  108530 httplog.go:90] GET /healthz: (1.001803ms) 0 [Go-http-client/1.1 127.0.0.1:51096]
I0827 13:35:16.848639  108530 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 13:35:16.848668  108530 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0827 13:35:16.848734  108530 httplog.go:90] GET /healthz: (2.692566ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51096]
I0827 13:35:16.858680  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.858308ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51096]
I0827 13:35:16.859138  108530 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler
I0827 13:35:16.878300  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:volume-scheduler: (1.48715ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51096]
I0827 13:35:16.899259  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.410527ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51096]
I0827 13:35:16.899507  108530 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:volume-scheduler
I0827 13:35:16.918317  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node: (1.511624ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51096]
I0827 13:35:16.939314  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.509903ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51096]
I0827 13:35:16.939361  108530 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 13:35:16.939388  108530 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0827 13:35:16.939419  108530 httplog.go:90] GET /healthz: (1.644579ms) 0 [Go-http-client/1.1 127.0.0.1:51288]
I0827 13:35:16.939541  108530 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node
I0827 13:35:16.947103  108530 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 13:35:16.947137  108530 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0827 13:35:16.947186  108530 httplog.go:90] GET /healthz: (1.070966ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:16.957993  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:attachdetach-controller: (1.239557ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:16.982449  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (5.598523ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:16.982762  108530 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0827 13:35:16.998268  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:clusterrole-aggregation-controller: (1.454536ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:17.019181  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.421426ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:17.019582  108530 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0827 13:35:17.038108  108530 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 13:35:17.038150  108530 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0827 13:35:17.038185  108530 httplog.go:90] GET /healthz: (885.854µs) 0 [Go-http-client/1.1 127.0.0.1:51096]
I0827 13:35:17.038231  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:cronjob-controller: (1.381092ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:17.049317  108530 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 13:35:17.049355  108530 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0827 13:35:17.049413  108530 httplog.go:90] GET /healthz: (1.063139ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:17.059241  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.461981ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:17.059489  108530 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0827 13:35:17.078238  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:daemon-set-controller: (1.370183ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:17.098539  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.774828ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:17.098764  108530 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0827 13:35:17.118156  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:deployment-controller: (1.361895ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:17.138294  108530 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 13:35:17.138333  108530 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0827 13:35:17.138371  108530 httplog.go:90] GET /healthz: (1.266142ms) 0 [Go-http-client/1.1 127.0.0.1:51096]
I0827 13:35:17.138912  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.101348ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:17.139105  108530 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I0827 13:35:17.147010  108530 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 13:35:17.147064  108530 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0827 13:35:17.147121  108530 httplog.go:90] GET /healthz: (1.06574ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:17.158201  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:disruption-controller: (1.413408ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:17.180351  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.715764ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:17.181139  108530 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I0827 13:35:17.198218  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:endpoint-controller: (1.406122ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:17.220538  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.749674ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:17.225140  108530 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0827 13:35:17.238189  108530 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 13:35:17.238221  108530 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0827 13:35:17.238252  108530 httplog.go:90] GET /healthz: (1.105239ms) 0 [Go-http-client/1.1 127.0.0.1:51096]
I0827 13:35:17.238283  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:expand-controller: (1.456064ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:17.247653  108530 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 13:35:17.247710  108530 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0827 13:35:17.247747  108530 httplog.go:90] GET /healthz: (1.649049ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51096]
I0827 13:35:17.258490  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.747408ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51096]
I0827 13:35:17.258746  108530 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
I0827 13:35:17.278164  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:generic-garbage-collector: (1.312469ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51096]
I0827 13:35:17.299035  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.217439ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51096]
I0827 13:35:17.299238  108530 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0827 13:35:17.318476  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:horizontal-pod-autoscaler: (1.557709ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51096]
I0827 13:35:17.337972  108530 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 13:35:17.338015  108530 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0827 13:35:17.338061  108530 httplog.go:90] GET /healthz: (938.758µs) 0 [Go-http-client/1.1 127.0.0.1:51288]
I0827 13:35:17.338909  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.101042ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51096]
I0827 13:35:17.339129  108530 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0827 13:35:17.347047  108530 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 13:35:17.347082  108530 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0827 13:35:17.347114  108530 httplog.go:90] GET /healthz: (1.052629ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51096]
I0827 13:35:17.358024  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:job-controller: (1.255908ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51096]
I0827 13:35:17.379151  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.421885ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51096]
I0827 13:35:17.379402  108530 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I0827 13:35:17.398154  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:namespace-controller: (1.356687ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51096]
I0827 13:35:17.419270  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.465028ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51096]
I0827 13:35:17.419554  108530 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I0827 13:35:17.438074  108530 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 13:35:17.438105  108530 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0827 13:35:17.438140  108530 httplog.go:90] GET /healthz: (1.010221ms) 0 [Go-http-client/1.1 127.0.0.1:51096]
I0827 13:35:17.438918  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:node-controller: (1.249108ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:17.451041  108530 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 13:35:17.451078  108530 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0827 13:35:17.451117  108530 httplog.go:90] GET /healthz: (5.056789ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:17.458562  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.825004ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:17.458715  108530 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I0827 13:35:17.477785  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:persistent-volume-binder: (1.02918ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:17.498601  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.875697ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:17.498882  108530 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0827 13:35:17.518015  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pod-garbage-collector: (1.274618ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:17.537994  108530 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 13:35:17.538024  108530 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0827 13:35:17.538067  108530 httplog.go:90] GET /healthz: (968.991µs) 0 [Go-http-client/1.1 127.0.0.1:51096]
I0827 13:35:17.539098  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.37883ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:17.539400  108530 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0827 13:35:17.546748  108530 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 13:35:17.546780  108530 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0827 13:35:17.546808  108530 httplog.go:90] GET /healthz: (840.901µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:17.557904  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replicaset-controller: (1.127355ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:17.578461  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.781964ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:17.578649  108530 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0827 13:35:17.597823  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replication-controller: (1.079783ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:17.619460  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.70616ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:17.619654  108530 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I0827 13:35:17.638066  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:resourcequota-controller: (1.265395ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:17.638209  108530 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 13:35:17.638314  108530 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0827 13:35:17.638448  108530 httplog.go:90] GET /healthz: (1.341345ms) 0 [Go-http-client/1.1 127.0.0.1:51096]
I0827 13:35:17.647377  108530 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 13:35:17.647511  108530 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0827 13:35:17.647712  108530 httplog.go:90] GET /healthz: (1.604943ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51096]
I0827 13:35:17.658417  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.661128ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51096]
I0827 13:35:17.658638  108530 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0827 13:35:17.678095  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:route-controller: (1.284323ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51096]
I0827 13:35:17.699324  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.389924ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51096]
I0827 13:35:17.699631  108530 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I0827 13:35:17.718168  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-account-controller: (1.311498ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51096]
I0827 13:35:17.737882  108530 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 13:35:17.738040  108530 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0827 13:35:17.738202  108530 httplog.go:90] GET /healthz: (1.104457ms) 0 [Go-http-client/1.1 127.0.0.1:51288]
I0827 13:35:17.738502  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.687298ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51096]
I0827 13:35:17.738995  108530 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I0827 13:35:17.746996  108530 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 13:35:17.747022  108530 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0827 13:35:17.747107  108530 httplog.go:90] GET /healthz: (987.751µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51096]
I0827 13:35:17.757741  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-controller: (980.784µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51096]
I0827 13:35:17.778640  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.93688ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51096]
I0827 13:35:17.778902  108530 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I0827 13:35:17.798398  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:statefulset-controller: (1.574544ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51096]
I0827 13:35:17.818620  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.819945ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51096]
I0827 13:35:17.818924  108530 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0827 13:35:17.838273  108530 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 13:35:17.838317  108530 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0827 13:35:17.838360  108530 httplog.go:90] GET /healthz: (1.231369ms) 0 [Go-http-client/1.1 127.0.0.1:51288]
I0827 13:35:17.838408  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:ttl-controller: (1.642341ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51096]
I0827 13:35:17.847966  108530 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 13:35:17.847993  108530 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0827 13:35:17.848028  108530 httplog.go:90] GET /healthz: (1.9166ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51096]
I0827 13:35:17.858524  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.751604ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51096]
I0827 13:35:17.858724  108530 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I0827 13:35:17.877790  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:certificate-controller: (1.056843ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51096]
I0827 13:35:17.898752  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.956166ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51096]
I0827 13:35:17.899069  108530 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I0827 13:35:17.918098  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pvc-protection-controller: (1.377242ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51096]
I0827 13:35:17.939534  108530 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 13:35:17.939571  108530 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0827 13:35:17.939602  108530 httplog.go:90] GET /healthz: (2.48891ms) 0 [Go-http-client/1.1 127.0.0.1:51288]
I0827 13:35:17.939724  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.958022ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51096]
I0827 13:35:17.940000  108530 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0827 13:35:17.947090  108530 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 13:35:17.947219  108530 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0827 13:35:17.947449  108530 httplog.go:90] GET /healthz: (1.326627ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:17.958191  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pv-protection-controller: (1.468298ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:17.978880  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.08483ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:17.979093  108530 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0827 13:35:17.997950  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles/system:controller:bootstrap-signer: (1.186357ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:17.999629  108530 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.136794ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:18.018615  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles: (1.845272ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:18.018830  108530 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0827 13:35:18.038338  108530 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 13:35:18.038364  108530 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0827 13:35:18.038401  108530 httplog.go:90] GET /healthz: (1.320913ms) 0 [Go-http-client/1.1 127.0.0.1:51096]
I0827 13:35:18.039082  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/extension-apiserver-authentication-reader: (2.367932ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:18.040445  108530 httplog.go:90] GET /api/v1/namespaces/kube-system: (867.891µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:18.046667  108530 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 13:35:18.046688  108530 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0827 13:35:18.046713  108530 httplog.go:90] GET /healthz: (708.817µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:18.058447  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.693899ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:18.058805  108530 storage_rbac.go:278] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
I0827 13:35:18.077924  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:bootstrap-signer: (991.966µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:18.085664  108530 httplog.go:90] GET /api/v1/namespaces/kube-system: (7.20107ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:18.098504  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.766874ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:18.098892  108530 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0827 13:35:18.118183  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:cloud-provider: (1.440128ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:18.119979  108530 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.377468ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:18.138145  108530 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 13:35:18.138299  108530 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0827 13:35:18.138337  108530 httplog.go:90] GET /healthz: (1.153417ms) 0 [Go-http-client/1.1 127.0.0.1:51096]
I0827 13:35:18.138913  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.063315ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:18.139177  108530 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0827 13:35:18.146769  108530 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 13:35:18.146793  108530 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0827 13:35:18.146837  108530 httplog.go:90] GET /healthz: (820.563µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:18.157795  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:token-cleaner: (1.055539ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:18.160216  108530 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.849107ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:18.178577  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.813885ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:18.178916  108530 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0827 13:35:18.197823  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-controller-manager: (1.064057ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:18.199451  108530 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.09621ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:18.218558  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.7576ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:18.218889  108530 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0827 13:35:18.237961  108530 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 13:35:18.237997  108530 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0827 13:35:18.238035  108530 httplog.go:90] GET /healthz: (898.662µs) 0 [Go-http-client/1.1 127.0.0.1:51096]
I0827 13:35:18.238508  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-scheduler: (1.657954ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:18.240158  108530 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.203165ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:18.247197  108530 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 13:35:18.247225  108530 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0827 13:35:18.247455  108530 httplog.go:90] GET /healthz: (1.240008ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:18.259571  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.856248ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:18.259771  108530 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0827 13:35:18.278119  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::extension-apiserver-authentication-reader: (1.29841ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:18.279891  108530 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.208368ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:18.298753  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.905967ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:18.299043  108530 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::extension-apiserver-authentication-reader in kube-system
I0827 13:35:18.317970  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-controller-manager: (1.224589ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:18.319357  108530 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.03018ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:18.338535  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.827907ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:18.338574  108530 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 13:35:18.338592  108530 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0827 13:35:18.338623  108530 httplog.go:90] GET /healthz: (1.494102ms) 0 [Go-http-client/1.1 127.0.0.1:51096]
I0827 13:35:18.339329  108530 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0827 13:35:18.346950  108530 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 13:35:18.346975  108530 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0827 13:35:18.347006  108530 httplog.go:90] GET /healthz: (783.892µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51096]
I0827 13:35:18.357770  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-scheduler: (952.862µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51096]
I0827 13:35:18.359218  108530 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.006545ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51096]
I0827 13:35:18.378458  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.728555ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51096]
I0827 13:35:18.378771  108530 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0827 13:35:18.397822  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:bootstrap-signer: (1.048416ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51096]
I0827 13:35:18.399445  108530 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.090293ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51096]
I0827 13:35:18.418390  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.643781ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51096]
I0827 13:35:18.418691  108530 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0827 13:35:18.437995  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:cloud-provider: (1.180022ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51096]
I0827 13:35:18.438022  108530 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 13:35:18.438043  108530 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0827 13:35:18.438089  108530 httplog.go:90] GET /healthz: (911.624µs) 0 [Go-http-client/1.1 127.0.0.1:51288]
I0827 13:35:18.439378  108530 httplog.go:90] GET /api/v1/namespaces/kube-system: (958.085µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:18.446679  108530 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 13:35:18.446815  108530 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0827 13:35:18.446994  108530 httplog.go:90] GET /healthz: (1.02174ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:18.459136  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.419731ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:18.460089  108530 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0827 13:35:18.478289  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:token-cleaner: (1.522665ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:18.480595  108530 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.828015ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:18.499096  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.286605ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:18.499359  108530 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0827 13:35:18.518029  108530 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings/system:controller:bootstrap-signer: (1.197263ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:18.519661  108530 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.239797ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:18.537821  108530 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0827 13:35:18.537870  108530 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0827 13:35:18.537915  108530 httplog.go:90] GET /healthz: (827.875µs) 0 [Go-http-client/1.1 127.0.0.1:51096]
I0827 13:35:18.539694  108530 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings: (2.844848ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:18.539990  108530 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0827 13:35:18.546793  108530 httplog.go:90] GET /healthz: (746.428µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:18.548509  108530 httplog.go:90] GET /api/v1/namespaces/default: (1.301363ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:18.550318  108530 httplog.go:90] POST /api/v1/namespaces: (1.406341ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:18.551528  108530 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (845.796µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:18.559776  108530 httplog.go:90] POST /api/v1/namespaces/default/services: (7.776264ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:18.561645  108530 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.264089ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:18.563752  108530 httplog.go:90] POST /api/v1/namespaces/default/endpoints: (1.602188ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:18.638141  108530 httplog.go:90] GET /healthz: (884.584µs) 200 [Go-http-client/1.1 127.0.0.1:51288]
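
Editor's note: the long run of GET 404 followed by POST 201 pairs above, interleaved with /healthz probes that keep failing on poststarthook/rbac/bootstrap-roles, is the apiserver's bootstrap-roles hook reconciling each default role and binding: look the object up, and create it only when it is missing. Below is a minimal sketch of that get-then-create pattern using client-go against a fake clientset; it is an illustration only, the real hook runs in-process through the apiserver's RBAC storage, and the helper name is hypothetical.

package main

import (
    "context"
    "fmt"

    rbacv1 "k8s.io/api/rbac/v1"
    apierrors "k8s.io/apimachinery/pkg/api/errors"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/kubernetes/fake"
)

// ensureClusterRoleBinding mirrors the GET-then-POST sequence in the log:
// a 404 on GET is followed by a Create; anything else is left untouched.
func ensureClusterRoleBinding(ctx context.Context, client kubernetes.Interface, name string) error {
    _, err := client.RbacV1().ClusterRoleBindings().Get(ctx, name, metav1.GetOptions{})
    if err == nil {
        return nil // already present, nothing to reconcile
    }
    if !apierrors.IsNotFound(err) {
        return err
    }
    crb := &rbacv1.ClusterRoleBinding{
        ObjectMeta: metav1.ObjectMeta{Name: name},
        RoleRef: rbacv1.RoleRef{
            APIGroup: rbacv1.GroupName,
            Kind:     "ClusterRole",
            Name:     name,
        },
    }
    _, err = client.RbacV1().ClusterRoleBindings().Create(ctx, crb, metav1.CreateOptions{})
    return err
}

func main() {
    client := fake.NewSimpleClientset()
    if err := ensureClusterRoleBinding(context.Background(), client, "system:controller:job-controller"); err != nil {
        fmt.Println("reconcile failed:", err)
        return
    }
    fmt.Println("clusterrolebinding ensured")
}

Once every default object exists the hook reports finished, which is the point above where GET /healthz starts returning 200.
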
W0827 13:35:18.639046  108530 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0827 13:35:18.639122  108530 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0827 13:35:18.639163  108530 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0827 13:35:18.639325  108530 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0827 13:35:18.639387  108530 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0827 13:35:18.639431  108530 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0827 13:35:18.639497  108530 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0827 13:35:18.639554  108530 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0827 13:35:18.639593  108530 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0827 13:35:18.639658  108530 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0827 13:35:18.639790  108530 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0827 13:35:18.639864  108530 factory.go:294] Creating scheduler from algorithm provider 'DefaultProvider'
I0827 13:35:18.639917  108530 factory.go:382] Creating scheduler with fit predicates 'map[CheckNodeCondition:{} CheckNodeDiskPressure:{} CheckNodeMemoryPressure:{} CheckNodePIDPressure:{} CheckVolumeBinding:{} GeneralPredicates:{} MatchInterPodAffinity:{} MaxAzureDiskVolumeCount:{} MaxCSIVolumeCountPred:{} MaxEBSVolumeCount:{} MaxGCEPDVolumeCount:{} NoDiskConflict:{} NoVolumeZoneConflict:{} PodToleratesNodeTaints:{}]' and priority functions 'map[BalancedResourceAllocation:{} ImageLocalityPriority:{} InterPodAffinityPriority:{} LeastRequestedPriority:{} NodeAffinityPriority:{} NodePreferAvoidPodsPriority:{} SelectorSpreadPriority:{} TaintTolerationPriority:{}]'
I0827 13:35:18.640219  108530 shared_informer.go:197] Waiting for caches to sync for scheduler
I0827 13:35:18.640476  108530 reflector.go:120] Starting reflector *v1.Pod (12h0m0s) from k8s.io/kubernetes/test/integration/scheduler/util.go:230
I0827 13:35:18.640503  108530 reflector.go:158] Listing and watching *v1.Pod from k8s.io/kubernetes/test/integration/scheduler/util.go:230
I0827 13:35:18.641340  108530 httplog.go:90] GET /api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0: (579.95µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:18.642813  108530 get.go:250] Starting watch for /api/v1/pods, rv=32646 labels= fields=status.phase!=Failed,status.phase!=Succeeded timeout=6m25s
I0827 13:35:18.740467  108530 shared_informer.go:227] caches populated
I0827 13:35:18.740499  108530 shared_informer.go:204] Caches are synced for scheduler 
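
Editor's note: the "Waiting for caches to sync" / "Caches are synced for scheduler" pair above, together with the per-resource "Starting reflector" and "Listing and watching" lines that follow, is the shared-informer startup handshake: each reflector does an initial LIST plus WATCH, and the scheduler blocks until every registered informer reports synced. A minimal sketch of that handshake with client-go and a fake clientset follows; the test itself drives this through its own util.go helpers, so this is not the test's code.

package main

import (
    "fmt"
    "time"

    "k8s.io/client-go/informers"
    "k8s.io/client-go/kubernetes/fake"
)

func main() {
    // The fake clientset stands in for the in-process test apiserver; the 1s
    // resync matches the "(1s)" reflectors started in the log below.
    client := fake.NewSimpleClientset()
    factory := informers.NewSharedInformerFactory(client, time.Second)

    // Registering informers before Start is what produces the per-resource
    // "Starting reflector" / "Listing and watching" lines.
    factory.Core().V1().Pods().Informer()
    factory.Core().V1().Nodes().Informer()

    stop := make(chan struct{})
    defer close(stop)
    factory.Start(stop)

    // WaitForCacheSync blocks until each informer's initial LIST has been
    // delivered, i.e. the "caches populated" point in the log.
    for typ, ok := range factory.WaitForCacheSync(stop) {
        fmt.Printf("cache synced for %v: %v\n", typ, ok)
    }
}
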
I0827 13:35:18.740821  108530 reflector.go:120] Starting reflector *v1.ReplicationController (1s) from k8s.io/client-go/informers/factory.go:133
I0827 13:35:18.740864  108530 reflector.go:120] Starting reflector *v1beta1.PodDisruptionBudget (1s) from k8s.io/client-go/informers/factory.go:133
I0827 13:35:18.740884  108530 reflector.go:158] Listing and watching *v1beta1.PodDisruptionBudget from k8s.io/client-go/informers/factory.go:133
I0827 13:35:18.740891  108530 reflector.go:120] Starting reflector *v1.Node (1s) from k8s.io/client-go/informers/factory.go:133
I0827 13:35:18.740906  108530 reflector.go:158] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:133
I0827 13:35:18.740983  108530 reflector.go:120] Starting reflector *v1beta1.CSINode (1s) from k8s.io/client-go/informers/factory.go:133
I0827 13:35:18.740993  108530 reflector.go:158] Listing and watching *v1beta1.CSINode from k8s.io/client-go/informers/factory.go:133
I0827 13:35:18.741082  108530 reflector.go:120] Starting reflector *v1.StatefulSet (1s) from k8s.io/client-go/informers/factory.go:133
I0827 13:35:18.741099  108530 reflector.go:158] Listing and watching *v1.StatefulSet from k8s.io/client-go/informers/factory.go:133
I0827 13:35:18.740872  108530 reflector.go:158] Listing and watching *v1.ReplicationController from k8s.io/client-go/informers/factory.go:133
I0827 13:35:18.740866  108530 reflector.go:120] Starting reflector *v1.StorageClass (1s) from k8s.io/client-go/informers/factory.go:133
I0827 13:35:18.741162  108530 reflector.go:158] Listing and watching *v1.StorageClass from k8s.io/client-go/informers/factory.go:133
I0827 13:35:18.741446  108530 reflector.go:120] Starting reflector *v1.PersistentVolume (1s) from k8s.io/client-go/informers/factory.go:133
I0827 13:35:18.741469  108530 reflector.go:158] Listing and watching *v1.PersistentVolume from k8s.io/client-go/informers/factory.go:133
I0827 13:35:18.741675  108530 reflector.go:120] Starting reflector *v1.PersistentVolumeClaim (1s) from k8s.io/client-go/informers/factory.go:133
I0827 13:35:18.741705  108530 reflector.go:158] Listing and watching *v1.PersistentVolumeClaim from k8s.io/client-go/informers/factory.go:133
I0827 13:35:18.741935  108530 reflector.go:120] Starting reflector *v1.ReplicaSet (1s) from k8s.io/client-go/informers/factory.go:133
I0827 13:35:18.741947  108530 reflector.go:158] Listing and watching *v1.ReplicaSet from k8s.io/client-go/informers/factory.go:133
I0827 13:35:18.741971  108530 httplog.go:90] GET /apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: (685.521µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51096]
I0827 13:35:18.742503  108530 httplog.go:90] GET /api/v1/nodes?limit=500&resourceVersion=0: (965.546µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51628]
I0827 13:35:18.742576  108530 httplog.go:90] GET /api/v1/replicationcontrollers?limit=500&resourceVersion=0: (1.070191ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51632]
I0827 13:35:18.742582  108530 reflector.go:120] Starting reflector *v1.Service (1s) from k8s.io/client-go/informers/factory.go:133
I0827 13:35:18.742599  108530 reflector.go:158] Listing and watching *v1.Service from k8s.io/client-go/informers/factory.go:133
I0827 13:35:18.742632  108530 httplog.go:90] GET /apis/apps/v1/statefulsets?limit=500&resourceVersion=0: (1.077455ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51630]
I0827 13:35:18.743027  108530 httplog.go:90] GET /apis/storage.k8s.io/v1beta1/csinodes?limit=500&resourceVersion=0: (397.098µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51626]
I0827 13:35:18.743184  108530 httplog.go:90] GET /apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: (458.553µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51096]
I0827 13:35:18.743341  108530 httplog.go:90] GET /api/v1/persistentvolumes?limit=500&resourceVersion=0: (540.307µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51634]
I0827 13:35:18.743584  108530 httplog.go:90] GET /api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: (348.435µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51630]
I0827 13:35:18.743586  108530 get.go:250] Starting watch for /apis/policy/v1beta1/poddisruptionbudgets, rv=32647 labels= fields= timeout=9m38s
I0827 13:35:18.743677  108530 httplog.go:90] GET /apis/apps/v1/replicasets?limit=500&resourceVersion=0: (490.779µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51632]
I0827 13:35:18.743753  108530 get.go:250] Starting watch for /apis/storage.k8s.io/v1beta1/csinodes, rv=32648 labels= fields= timeout=6m34s
I0827 13:35:18.743759  108530 get.go:250] Starting watch for /api/v1/replicationcontrollers, rv=32646 labels= fields= timeout=9m38s
I0827 13:35:18.744285  108530 httplog.go:90] GET /api/v1/services?limit=500&resourceVersion=0: (531.24µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51628]
I0827 13:35:18.744444  108530 get.go:250] Starting watch for /apis/apps/v1/statefulsets, rv=32649 labels= fields= timeout=6m46s
I0827 13:35:18.743922  108530 get.go:250] Starting watch for /api/v1/nodes, rv=32646 labels= fields= timeout=8m18s
I0827 13:35:18.744830  108530 get.go:250] Starting watch for /api/v1/persistentvolumes, rv=32646 labels= fields= timeout=6m40s
I0827 13:35:18.744868  108530 get.go:250] Starting watch for /apis/apps/v1/replicasets, rv=32649 labels= fields= timeout=6m1s
I0827 13:35:18.744871  108530 get.go:250] Starting watch for /api/v1/persistentvolumeclaims, rv=32646 labels= fields= timeout=6m1s
I0827 13:35:18.745189  108530 get.go:250] Starting watch for /apis/storage.k8s.io/v1/storageclasses, rv=32649 labels= fields= timeout=8m6s
I0827 13:35:18.745568  108530 get.go:250] Starting watch for /api/v1/services, rv=32923 labels= fields= timeout=6m33s
I0827 13:35:18.840800  108530 shared_informer.go:227] caches populated
I0827 13:35:18.941040  108530 shared_informer.go:227] caches populated
I0827 13:35:19.041230  108530 shared_informer.go:227] caches populated
I0827 13:35:19.141345  108530 shared_informer.go:227] caches populated
I0827 13:35:19.241581  108530 shared_informer.go:227] caches populated
I0827 13:35:19.341784  108530 shared_informer.go:227] caches populated
I0827 13:35:19.442028  108530 shared_informer.go:227] caches populated
I0827 13:35:19.542229  108530 shared_informer.go:227] caches populated
I0827 13:35:19.642441  108530 shared_informer.go:227] caches populated
E0827 13:35:19.690905  108530 event_broadcaster.go:244] Unable to write event: 'Post http://127.0.0.1:37463/apis/events.k8s.io/v1beta1/namespaces/permit-pluginc09efd45-f39e-4575-9605-ccf7b9ec3485/events: dial tcp 127.0.0.1:37463: connect: connection refused' (may retry after sleeping)
I0827 13:35:19.742649  108530 shared_informer.go:227] caches populated
I0827 13:35:19.743480  108530 reflector.go:241] k8s.io/client-go/informers/factory.go:133: forcing resync
I0827 13:35:19.743690  108530 reflector.go:241] k8s.io/client-go/informers/factory.go:133: forcing resync
I0827 13:35:19.744580  108530 reflector.go:241] k8s.io/client-go/informers/factory.go:133: forcing resync
I0827 13:35:19.744611  108530 reflector.go:241] k8s.io/client-go/informers/factory.go:133: forcing resync
I0827 13:35:19.744713  108530 reflector.go:241] k8s.io/client-go/informers/factory.go:133: forcing resync
I0827 13:35:19.744917  108530 reflector.go:241] k8s.io/client-go/informers/factory.go:133: forcing resync
I0827 13:35:19.842997  108530 shared_informer.go:227] caches populated
I0827 13:35:19.846195  108530 httplog.go:90] POST /api/v1/nodes: (2.709116ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51652]
I0827 13:35:19.846835  108530 node_tree.go:93] Added node "node1" in group "" to NodeTree
I0827 13:35:19.849215  108530 httplog.go:90] PATCH /api/v1/nodes/node1: (2.2683ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51652]
I0827 13:35:19.951309  108530 httplog.go:90] GET /api/v1/nodes/node1: (1.401999ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51652]
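
Editor's note: at this point the test has created "node1", patched it (presumably to set fields that make it schedulable; the patch body is not shown in the log), and read it back. The very next request is the pod POST that panics below. A small sketch of the create-then-patch-then-get sequence against a fake clientset, with a label patch standing in for whatever the test actually patches:

package main

import (
    "context"
    "fmt"

    v1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/types"
    "k8s.io/client-go/kubernetes/fake"
)

func main() {
    ctx := context.Background()
    client := fake.NewSimpleClientset()

    // POST /api/v1/nodes: create the node the test pods will target.
    node := &v1.Node{ObjectMeta: metav1.ObjectMeta{Name: "node1"}}
    if _, err := client.CoreV1().Nodes().Create(ctx, node, metav1.CreateOptions{}); err != nil {
        panic(err)
    }

    // PATCH /api/v1/nodes/node1: a strategic-merge patch; the label here is a
    // placeholder for whatever fields the test really sets.
    patch := []byte(`{"metadata":{"labels":{"role":"integration-test"}}}`)
    if _, err := client.CoreV1().Nodes().Patch(ctx, "node1", types.StrategicMergePatchType, patch, metav1.PatchOptions{}); err != nil {
        panic(err)
    }

    // GET /api/v1/nodes/node1: read it back, as the test does before creating pods.
    got, err := client.CoreV1().Nodes().Get(ctx, "node1", metav1.GetOptions{})
    if err != nil {
        panic(err)
    }
    fmt.Println("node ready for scheduling:", got.Name, got.Labels)
}
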
I0827 13:35:19.952581  108530 panic.go:522] POST /api/v1/namespaces/preemptionb0248f44-fca0-4e7e-850f-86bd1e71828c/pods: (641.398µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51652]
E0827 13:35:19.952657  108530 runtime.go:67] Observed a panic: runtime error: invalid memory address or nil pointer dereference
goroutine 45249 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1.1(0xc03fb9ecc0)
	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/timeout.go:106 +0xda
panic(0x3b90c00, 0xa7257a0)
	/usr/local/go/src/runtime/panic.go:522 +0x1b5
k8s.io/kubernetes/plugin/pkg/admission/priority.(*Plugin).getDefaultPriorityClass(0xc03afd8180, 0x38e1980, 0xc0002e9801, 0xc03fc767c8)
	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/plugin/pkg/admission/priority/admission.go:252 +0x55
k8s.io/kubernetes/plugin/pkg/admission/priority.(*Plugin).getDefaultPriority(0xc03afd8180, 0xc03fc767c8, 0xc03fd44400, 0xc03fcf8450, 0x0, 0x0, 0x7ff590)
	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/plugin/pkg/admission/priority/admission.go:270 +0x2f
k8s.io/kubernetes/plugin/pkg/admission/priority.(*Plugin).admitPod(0xc03afd8180, 0x78bd840, 0xc03fc865a0, 0x4333601, 0x2)
	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/plugin/pkg/admission/priority/admission.go:184 +0x15c
k8s.io/kubernetes/plugin/pkg/admission/priority.(*Plugin).Admit(0xc03afd8180, 0x7832ce0, 0xc03fd287e0, 0x78bd840, 0xc03fc865a0, 0x78661e0, 0xc03be19520, 0x118, 0xc03af44b40)
	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/plugin/pkg/admission/priority/admission.go:112 +0x166
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/admission.auditHandler.Admit(0x77c7fe0, 0xc03afd8180, 0x0, 0x7832ce0, 0xc03fd287e0, 0x78bd840, 0xc03fc865a0, 0x78661e0, 0xc03be19520, 0x6, ...)
	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/admission/audit.go:57 +0x136
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers.createHandler.func1(0x7824ca0, 0xc03a94d598, 0xc03fbc3400)
	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers/create.go:142 +0x219e
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints.restfulCreateResource.func1(0xc03fd28660, 0xc03fd22700)
	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/installer.go:1095 +0xe5
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/metrics.InstrumentRouteFunc.func1(0xc03fd28660, 0xc03fd22700)
	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/metrics/metrics.go:300 +0x254
k8s.io/kubernetes/vendor/github.com/emicklei/go-restful.(*Container).dispatch(0xc03af44f30, 0x7f1a544b8290, 0xc03a94d588, 0xc03fbc3400)
	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/emicklei/go-restful/container.go:288 +0xa3b
k8s.io/kubernetes/vendor/github.com/emicklei/go-restful.(*Container).Dispatch(...)
	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/emicklei/go-restful/container.go:199
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x434a198, 0xe, 0xc03af44f30, 0xc037224e00, 0x7f1a544b8290, 0xc03a94d588, 0xc03fbc3400)
	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/handler.go:146 +0x4e4
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1a544b8290, 0xc03a94d588, 0xc03fbc3400)
	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/authorization.go:64 +0x4fa
net/http.HandlerFunc.ServeHTTP(0xc03afbd500, 0x7f1a544b8290, 0xc03a94d588, 0xc03fbc3400)
	/usr/local/go/src/net/http/server.go:1995 +0x44
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1a544b8290, 0xc03a94d588, 0xc03fbc3400)
	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/maxinflight.go:160 +0x5c7
net/http.HandlerFunc.ServeHTTP(0xc03722e4e0, 0x7f1a544b8290, 0xc03a94d588, 0xc03fbc3400)
	/usr/local/go/src/net/http/server.go:1995 +0x44
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1a544b8290, 0xc03a94d588, 0xc03fbc3400)
	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/impersonation.go:50 +0x1f6f
net/http.HandlerFunc.ServeHTTP(0xc03afbd540, 0x7f1a544b8290, 0xc03a94d588, 0xc03fbc3400)
	/usr/local/go/src/net/http/server.go:1995 +0x44
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1a544b8290, 0xc03a94d588, 0xc03fbc3300)
	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/authentication.go:81 +0x527
net/http.HandlerFunc.ServeHTTP(0xc037220c80, 0x7f1a544b8290, 0xc03a94d588, 0xc03fbc3300)
	/usr/local/go/src/net/http/server.go:1995 +0x44
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc03fb9ecc0, 0xc03722c560, 0x7833760, 0xc03a94d588, 0xc03fbc3300)
	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/timeout.go:111 +0xb3
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP
	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/timeout.go:98 +0x1b1

goroutine 45274 [running]:
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic(0x38f0100, 0xc03fb894e0)
	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:65 +0x7b
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0xc03fb67c28, 0x1, 0x1)
	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:47 +0x82
panic(0x38f0100, 0xc03fb894e0)
	/usr/local/go/src/runtime/panic.go:522 +0x1b5
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP(0xc03722c560, 0x7824ce0, 0xc03fd22690, 0xc03fbc3300)
	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/timeout.go:117 +0x3ef
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithWaitGroup.func1(0x7824ce0, 0xc03fd22690, 0xc03fbc3200)
	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/waitgroup.go:47 +0xf3
net/http.HandlerFunc.ServeHTTP(0xc03722e510, 0x7824ce0, 0xc03fd22690, 0xc03fbc3200)
	/usr/local/go/src/net/http/server.go:1995 +0x44
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithRequestInfo.func1(0x7824ce0, 0xc03fd22690, 0xc03fbc3100)
	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/requestinfo.go:39 +0x2b8
net/http.HandlerFunc.ServeHTTP(0xc03722e540, 0x7824ce0, 0xc03fd22690, 0xc03fbc3100)
	/usr/local/go/src/net/http/server.go:1995 +0x44
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.WithLogging.func1(0x7827ba0, 0xc03fc72e00, 0xc03fa47600)
	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog/httplog.go:89 +0x29c
net/http.HandlerFunc.ServeHTTP(0xc03722c580, 0x7827ba0, 0xc03fc72e00, 0xc03fa47600)
	/usr/local/go/src/net/http/server.go:1995 +0x44
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.withPanicRecovery.func1(0x7827ba0, 0xc03fc72e00, 0xc03fa47600)
	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/wrap.go:44 +0x105
net/http.HandlerFunc.ServeHTTP(0xc03722c5a0, 0x7827ba0, 0xc03fc72e00, 0xc03fa47600)
	/usr/local/go/src/net/http/server.go:1995 +0x44
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.(*APIServerHandler).ServeHTTP(...)
	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/handler.go:189
k8s.io/kubernetes/test/integration/scheduler.initTestMaster.func1(0x7827ba0, 0xc03fc72e00, 0xc03fa47600)
	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/integration/scheduler/util.go:121 +0x7b
net/http.HandlerFunc.ServeHTTP(0xc03ae8bce0, 0x7827ba0, 0xc03fc72e00, 0xc03fa47600)
	/usr/local/go/src/net/http/server.go:1995 +0x44
net/http.serverHandler.ServeHTTP(0xc03ab64340, 0x7827ba0, 0xc03fc72e00, 0xc03fa47600)
	/usr/local/go/src/net/http/server.go:2774 +0xa8
net/http.(*conn).serve(0xc03fa60c80, 0x7832c20, 0xc03fb70f40)
	/usr/local/go/src/net/http/server.go:1878 +0x851
created by net/http.(*Server).Serve
	/usr/local/go/src/net/http/server.go:2884 +0x2f4
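
Editor's note: this trace is the actual failure. (*Plugin).getDefaultPriorityClass at plugin/pkg/admission/priority/admission.go:252 hits a nil pointer dereference while admitting the test pod, the timeout filter re-panics it, and the pod POST above is logged with status 0. One plausible reading, which the trace by itself does not confirm, is that the plugin's PriorityClass lister was never initialized for this in-process apiserver. The stripped-down sketch below only reproduces that failure shape and the guard that would avoid it; it is not the Kubernetes code, and all names are illustrative.

package main

import "fmt"

// PriorityClass and lister are stand-ins for the scheduling API type and the
// informer-backed lister the admission plugin normally receives; only the
// shape of the nil dereference is meant to match the stack trace.
type PriorityClass struct {
    Name          string
    Value         int32
    GlobalDefault bool
}

type lister struct {
    items []*PriorityClass
}

func (l *lister) List() []*PriorityClass {
    return l.items // dereferences l, so it panics when l is nil
}

type Plugin struct {
    lister *lister // nil if the informer factory was never wired in
}

func (p *Plugin) getDefaultPriorityClass() *PriorityClass {
    if p.lister == nil {
        // Guard added for illustration; without it, the call below reproduces
        // "runtime error: invalid memory address or nil pointer dereference".
        return nil
    }
    for _, pc := range p.lister.List() {
        if pc.GlobalDefault {
            return pc
        }
    }
    return nil
}

func main() {
    p := &Plugin{} // lister deliberately left nil
    fmt.Println(p.getDefaultPriorityClass())
}

The same panic is then logged a second time by the HTTP server's own recovery path, which is the "http: panic serving" block that follows.
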
E0827 13:35:19.952706  108530 wrap.go:32] apiserver panic'd on POST /api/v1/namespaces/preemptionb0248f44-fca0-4e7e-850f-86bd1e71828c/pods
2019-08-27 13:35:19.952807 I | http: panic serving 127.0.0.1:51652: runtime error: invalid memory address or nil pointer dereference
goroutine 45249 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1.1(0xc03fb9ecc0)
	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/timeout.go:106 +0xda
panic(0x3b90c00, 0xa7257a0)
	/usr/local/go/src/runtime/panic.go:522 +0x1b5
k8s.io/kubernetes/plugin/pkg/admission/priority.(*Plugin).getDefaultPriorityClass(0xc03afd8180, 0x38e1980, 0xc0002e9801, 0xc03fc767c8)
	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/plugin/pkg/admission/priority/admission.go:252 +0x55
k8s.io/kubernetes/plugin/pkg/admission/priority.(*Plugin).getDefaultPriority(0xc03afd8180, 0xc03fc767c8, 0xc03fd44400, 0xc03fcf8450, 0x0, 0x0, 0x7ff590)
	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/plugin/pkg/admission/priority/admission.go:270 +0x2f
k8s.io/kubernetes/plugin/pkg/admission/priority.(*Plugin).admitPod(0xc03afd8180, 0x78bd840, 0xc03fc865a0, 0x4333601, 0x2)
	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/plugin/pkg/admission/priority/admission.go:184 +0x15c
k8s.io/kubernetes/plugin/pkg/admission/priority.(*Plugin).Admit(0xc03afd8180, 0x7832ce0, 0xc03fd287e0, 0x78bd840, 0xc03fc865a0, 0x78661e0, 0xc03be19520, 0x118, 0xc03af44b40)
	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/plugin/pkg/admission/priority/admission.go:112 +0x166
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/admission.auditHandler.Admit(0x77c7fe0, 0xc03afd8180, 0x0, 0x7832ce0, 0xc03fd287e0, 0x78bd840, 0xc03fc865a0, 0x78661e0, 0xc03be19520, 0x6, ...)
	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/admission/audit.go:57 +0x136
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers.createHandler.func1(0x7824ca0, 0xc03a94d598, 0xc03fbc3400)
	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers/create.go:142 +0x219e
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints.restfulCreateResource.func1(0xc03fd28660, 0xc03fd22700)
	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/installer.go:1095 +0xe5
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/metrics.InstrumentRouteFunc.func1(0xc03fd28660, 0xc03fd22700)
	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/metrics/metrics.go:300 +0x254
k8s.io/kubernetes/vendor/github.com/emicklei/go-restful.(*Container).dispatch(0xc03af44f30, 0x7f1a544b8290, 0xc03a94d588, 0xc03fbc3400)
	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/emicklei/go-restful/container.go:288 +0xa3b
k8s.io/kubernetes/vendor/github.com/emicklei/go-restful.(*Container).Dispatch(...)
	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/emicklei/go-restful/container.go:199
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x434a198, 0xe, 0xc03af44f30, 0xc037224e00, 0x7f1a544b8290, 0xc03a94d588, 0xc03fbc3400)
	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/handler.go:146 +0x4e4
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1a544b8290, 0xc03a94d588, 0xc03fbc3400)
	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/authorization.go:64 +0x4fa
net/http.HandlerFunc.ServeHTTP(0xc03afbd500, 0x7f1a544b8290, 0xc03a94d588, 0xc03fbc3400)
	/usr/local/go/src/net/http/server.go:1995 +0x44
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1a544b8290, 0xc03a94d588, 0xc03fbc3400)
	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/maxinflight.go:160 +0x5c7
net/http.HandlerFunc.ServeHTTP(0xc03722e4e0, 0x7f1a544b8290, 0xc03a94d588, 0xc03fbc3400)
	/usr/local/go/src/net/http/server.go:1995 +0x44
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1a544b8290, 0xc03a94d588, 0xc03fbc3400)
	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/impersonation.go:50 +0x1f6f
net/http.HandlerFunc.ServeHTTP(0xc03afbd540, 0x7f1a544b8290, 0xc03a94d588, 0xc03fbc3400)
	/usr/local/go/src/net/http/server.go:1995 +0x44
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1a544b8290, 0xc03a94d588, 0xc03fbc3300)
	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/authentication.go:81 +0x527
net/http.HandlerFunc.ServeHTTP(0xc037220c80, 0x7f1a544b8290, 0xc03a94d588, 0xc03fbc3300)
	/usr/local/go/src/net/http/server.go:1995 +0x44
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc03fb9ecc0, 0xc03722c560, 0x7833760, 0xc03a94d588, 0xc03fbc3300)
	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/timeout.go:111 +0xb3
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP
	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/timeout.go:98 +0x1b1

goroutine 45274 [running]:
net/http.(*conn).serve.func1(0xc03fa60c80)
	/usr/local/go/src/net/http/server.go:1769 +0x139
panic(0x38f0100, 0xc03fb894e0)
	/usr/local/go/src/runtime/panic.go:522 +0x1b5
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0xc03fb67c28, 0x1, 0x1)
	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:54 +0x105
panic(0x38f0100, 0xc03fb894e0)
	/usr/local/go/src/runtime/panic.go:522 +0x1b5
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP(0xc03722c560, 0x7824ce0, 0xc03fd22690, 0xc03fbc3300)
	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/timeout.go:117 +0x3ef
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithWaitGroup.func1(0x7824ce0, 0xc03fd22690, 0xc03fbc3200)
	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/waitgroup.go:47 +0xf3
net/http.HandlerFunc.ServeHTTP(0xc03722e510, 0x7824ce0, 0xc03fd22690, 0xc03fbc3200)
	/usr/local/go/src/net/http/server.go:1995 +0x44
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithRequestInfo.func1(0x7824ce0, 0xc03fd22690, 0xc03fbc3100)
	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/requestinfo.go:39 +0x2b8
net/http.HandlerFunc.ServeHTTP(0xc03722e540, 0x7824ce0, 0xc03fd22690, 0xc03fbc3100)
	/usr/local/go/src/net/http/server.go:1995 +0x44
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.WithLogging.func1(0x7827ba0, 0xc03fc72e00, 0xc03fa47600)
	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog/httplog.go:89 +0x29c
net/http.HandlerFunc.ServeHTTP(0xc03722c580, 0x7827ba0, 0xc03fc72e00, 0xc03fa47600)
	/usr/local/go/src/net/http/server.go:1995 +0x44
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.withPanicRecovery.func1(0x7827ba0, 0xc03fc72e00, 0xc03fa47600)
	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/wrap.go:44 +0x105
net/http.HandlerFunc.ServeHTTP(0xc03722c5a0, 0x7827ba0, 0xc03fc72e00, 0xc03fa47600)
	/usr/local/go/src/net/http/server.go:1995 +0x44
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.(*APIServerHandler).ServeHTTP(...)
	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/handler.go:189
k8s.io/kubernetes/test/integration/scheduler.initTestMaster.func1(0x7827ba0, 0xc03fc72e00, 0xc03fa47600)
	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/integration/scheduler/util.go:121 +0x7b
net/http.HandlerFunc.ServeHTTP(0xc03ae8bce0, 0x7827ba0, 0xc03fc72e00, 0xc03fa47600)
	/usr/local/go/src/net/http/server.go:1995 +0x44
net/http.serverHandler.ServeHTTP(0xc03ab64340, 0x7827ba0, 0xc03fc72e00, 0xc03fa47600)
	/usr/local/go/src/net/http/server.go:2774 +0xa8
net/http.(*conn).serve(0xc03fa60c80, 0x7832c20, 0xc03fb70f40)
	/usr/local/go/src/net/http/server.go:1878 +0x851
created by net/http.(*Server).Serve
	/usr/local/go/src/net/http/server.go:2884 +0x2f4
E0827 13:35:19.953733  108530 scheduling_queue.go:833] Error while retrieving next pod from scheduling queue: scheduling queue is closed
I0827 13:35:19.954041  108530 httplog.go:90] GET /apis/policy/v1beta1/poddisruptionbudgets?allowWatchBookmarks=true&resourceVersion=32647&timeout=9m38s&timeoutSeconds=578&watch=true: (1.210638653s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51640]
I0827 13:35:19.954076  108530 httplog.go:90] GET /apis/storage.k8s.io/v1beta1/csinodes?allowWatchBookmarks=true&resourceVersion=32648&timeout=6m34s&timeoutSeconds=394&watch=true: (1.21054855s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51642]
I0827 13:35:19.954166  108530 httplog.go:90] GET /api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=32646&timeout=6m40s&timeoutSeconds=400&watch=true: (1.209653669s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51644]
I0827 13:35:19.954267  108530 httplog.go:90] GET /apis/apps/v1/replicasets?allowWatchBookmarks=true&resourceVersion=32649&timeout=6m1s&timeoutSeconds=361&watch=true: (1.209678209s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51632]
I0827 13:35:19.954300  108530 httplog.go:90] GET /api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=32646&timeout=6m1s&timeoutSeconds=361&watch=true: (1.209724182s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51646]
I0827 13:35:19.954362  108530 httplog.go:90] GET /apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=32649&timeout=8m6s&timeoutSeconds=486&watch=true: (1.209607383s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51628]
I0827 13:35:19.954396  108530 httplog.go:90] GET /api/v1/services?allowWatchBookmarks=true&resourceVersion=32923&timeout=6m33s&timeoutSeconds=393&watch=true: (1.209554011s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51650]
I0827 13:35:19.954671  108530 httplog.go:90] GET /api/v1/nodes?allowWatchBookmarks=true&resourceVersion=32646&timeout=8m18s&timeoutSeconds=498&watch=true: (1.211043789s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51626]
I0827 13:35:19.954832  108530 httplog.go:90] GET /api/v1/replicationcontrollers?allowWatchBookmarks=true&resourceVersion=32646&timeout=9m38s&timeoutSeconds=578&watch=true: (1.211383462s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51636]
I0827 13:35:19.954002  108530 httplog.go:90] GET /api/v1/pods?allowWatchBookmarks=true&fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&resourceVersion=32646&timeoutSeconds=385&watch=true: (1.31214951s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51288]
I0827 13:35:19.955100  108530 httplog.go:90] GET /apis/apps/v1/statefulsets?allowWatchBookmarks=true&resourceVersion=32649&timeout=6m46s&timeoutSeconds=406&watch=true: (1.211283223s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51638]
I0827 13:35:19.961208  108530 httplog.go:90] DELETE /api/v1/nodes: (7.678886ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51746]
I0827 13:35:19.961415  108530 controller.go:176] Shutting down kubernetes service endpoint reconciler
I0827 13:35:19.962617  108530 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.024398ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51746]
I0827 13:35:19.964641  108530 httplog.go:90] PUT /api/v1/namespaces/default/endpoints/kubernetes: (1.643448ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51746]
--- FAIL: TestPreemption (4.86s)
    preemption_test.go:259: Test [basic pod preemption]: Error running pause pod: Error creating pause pod: Post http://127.0.0.1:43149/api/v1/namespaces/preemptionb0248f44-fca0-4e7e-850f-86bd1e71828c/pods: EOF

				from junit_eb089aee80105aff5db0557ae4449d31f19359f2_20190827-132740.xml
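The "Post ... EOF" in the failure above lines up with the apiserver-side panic captured in the goroutine dump earlier in this log (the priority admission plugin panicking under the timeout handler), i.e. the in-process apiserver dropped the connection mid-request. As a hedged sketch only, one way to rerun just this integration test from a kubernetes checkout, assuming the integration-test etcd is already listening on 127.0.0.1:2379 (the suite starts its own apiserver per test; exact environment setup may differ by branch):

    # sketch: rerun only the failing preemption integration test; requires a reachable etcd at 127.0.0.1:2379
    go test -v k8s.io/kubernetes/test/integration/scheduler -run TestPreemption$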
