PR: draveness — feat: update taint nodes by condition to GA
Result: FAILURE
Tests: 8 failed / 2860 succeeded
Started: 2019-09-19 11:04
Elapsed: 27m41s
Revision:
Builder: gke-prow-ssd-pool-1a225945-txmz
Refs: master:b8866250, 82703:aa77a5ef
pod: 1d0fdb43-dacd-11e9-87eb-663f9ca08b1f
infra-commit: fe9f237a8
repo: k8s.io/kubernetes
repo-commit: e5483b3b8c17f1df03db264ce49ad97d06d588e4
repos: {'k8s.io/kubernetes': 'master:b88662505d288297750becf968bf307dacf872fa,82703:aa77a5ef3e282d84991730b825e1eee4d09eda69'}

Test Failures


k8s.io/kubernetes/test/integration/scheduler TestNodePIDPressure 33s

go test -v k8s.io/kubernetes/test/integration/scheduler -run TestNodePIDPressure$
=== RUN   TestNodePIDPressure
W0919 11:25:54.666778  108638 services.go:35] No CIDR for service cluster IPs specified. Default value which was 10.0.0.0/24 is deprecated and will be removed in future releases. Please specify it using --service-cluster-ip-range on kube-apiserver.
I0919 11:25:54.666797  108638 services.go:47] Setting service IP to "10.0.0.1" (read-write).
I0919 11:25:54.666810  108638 master.go:303] Node port range unspecified. Defaulting to 30000-32767.
I0919 11:25:54.666821  108638 master.go:259] Using reconciler: 
I0919 11:25:54.669358  108638 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.669553  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:25:54.669741  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:25:54.670980  108638 store.go:1342] Monitoring podtemplates count at <storage-prefix>//podtemplates
I0919 11:25:54.674711  108638 reflector.go:153] Listing and watching *core.PodTemplate from storage/cacher.go:/podtemplates
I0919 11:25:54.678552  108638 watch_cache.go:405] Replace watchCache (rev: 30684) 
I0919 11:25:54.679841  108638 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.680332  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:25:54.680422  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:25:54.681898  108638 store.go:1342] Monitoring events count at <storage-prefix>//events
I0919 11:25:54.682148  108638 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.683314  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:25:54.682018  108638 reflector.go:153] Listing and watching *core.Event from storage/cacher.go:/events
I0919 11:25:54.683414  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:25:54.684261  108638 store.go:1342] Monitoring limitranges count at <storage-prefix>//limitranges
I0919 11:25:54.684291  108638 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.684448  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:25:54.684466  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:25:54.684550  108638 reflector.go:153] Listing and watching *core.LimitRange from storage/cacher.go:/limitranges
I0919 11:25:54.686231  108638 watch_cache.go:405] Replace watchCache (rev: 30684) 
I0919 11:25:54.686518  108638 store.go:1342] Monitoring resourcequotas count at <storage-prefix>//resourcequotas
I0919 11:25:54.686713  108638 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.687025  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:25:54.687043  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:25:54.687115  108638 reflector.go:153] Listing and watching *core.ResourceQuota from storage/cacher.go:/resourcequotas
I0919 11:25:54.689170  108638 watch_cache.go:405] Replace watchCache (rev: 30684) 
I0919 11:25:54.690230  108638 store.go:1342] Monitoring secrets count at <storage-prefix>//secrets
I0919 11:25:54.690462  108638 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.690635  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:25:54.691841  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:25:54.690753  108638 reflector.go:153] Listing and watching *core.Secret from storage/cacher.go:/secrets
I0919 11:25:54.693359  108638 watch_cache.go:405] Replace watchCache (rev: 30684) 
I0919 11:25:54.694479  108638 store.go:1342] Monitoring persistentvolumes count at <storage-prefix>//persistentvolumes
I0919 11:25:54.694665  108638 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.694783  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:25:54.694800  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:25:54.694869  108638 reflector.go:153] Listing and watching *core.PersistentVolume from storage/cacher.go:/persistentvolumes
I0919 11:25:54.695836  108638 store.go:1342] Monitoring persistentvolumeclaims count at <storage-prefix>//persistentvolumeclaims
I0919 11:25:54.696008  108638 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.696136  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:25:54.696153  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:25:54.696220  108638 reflector.go:153] Listing and watching *core.PersistentVolumeClaim from storage/cacher.go:/persistentvolumeclaims
I0919 11:25:54.697989  108638 store.go:1342] Monitoring configmaps count at <storage-prefix>//configmaps
I0919 11:25:54.698139  108638 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.698234  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:25:54.698251  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:25:54.698349  108638 reflector.go:153] Listing and watching *core.ConfigMap from storage/cacher.go:/configmaps
I0919 11:25:54.698939  108638 watch_cache.go:405] Replace watchCache (rev: 30684) 
I0919 11:25:54.699710  108638 watch_cache.go:405] Replace watchCache (rev: 30684) 
I0919 11:25:54.701494  108638 watch_cache.go:405] Replace watchCache (rev: 30684) 
I0919 11:25:54.701873  108638 store.go:1342] Monitoring namespaces count at <storage-prefix>//namespaces
I0919 11:25:54.702063  108638 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.702181  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:25:54.702199  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:25:54.702300  108638 reflector.go:153] Listing and watching *core.Namespace from storage/cacher.go:/namespaces
I0919 11:25:54.704061  108638 store.go:1342] Monitoring endpoints count at <storage-prefix>//services/endpoints
I0919 11:25:54.704209  108638 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.704376  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:25:54.704467  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:25:54.704574  108638 reflector.go:153] Listing and watching *core.Endpoints from storage/cacher.go:/services/endpoints
I0919 11:25:54.705157  108638 watch_cache.go:405] Replace watchCache (rev: 30684) 
I0919 11:25:54.706853  108638 watch_cache.go:405] Replace watchCache (rev: 30684) 
I0919 11:25:54.707177  108638 store.go:1342] Monitoring nodes count at <storage-prefix>//minions
I0919 11:25:54.707356  108638 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.707491  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:25:54.707507  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:25:54.707582  108638 reflector.go:153] Listing and watching *core.Node from storage/cacher.go:/minions
I0919 11:25:54.708955  108638 watch_cache.go:405] Replace watchCache (rev: 30684) 
I0919 11:25:54.709334  108638 store.go:1342] Monitoring pods count at <storage-prefix>//pods
I0919 11:25:54.709500  108638 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.709615  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:25:54.709631  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:25:54.709726  108638 reflector.go:153] Listing and watching *core.Pod from storage/cacher.go:/pods
I0919 11:25:54.711184  108638 watch_cache.go:405] Replace watchCache (rev: 30684) 
I0919 11:25:54.712187  108638 store.go:1342] Monitoring serviceaccounts count at <storage-prefix>//serviceaccounts
I0919 11:25:54.712320  108638 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.712429  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:25:54.712444  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:25:54.712535  108638 reflector.go:153] Listing and watching *core.ServiceAccount from storage/cacher.go:/serviceaccounts
I0919 11:25:54.714240  108638 watch_cache.go:405] Replace watchCache (rev: 30684) 
I0919 11:25:54.714532  108638 store.go:1342] Monitoring services count at <storage-prefix>//services/specs
I0919 11:25:54.714598  108638 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.714773  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:25:54.714794  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:25:54.714874  108638 reflector.go:153] Listing and watching *core.Service from storage/cacher.go:/services/specs
I0919 11:25:54.716624  108638 watch_cache.go:405] Replace watchCache (rev: 30684) 
I0919 11:25:54.716973  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:25:54.716991  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:25:54.718261  108638 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.718375  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:25:54.718391  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:25:54.719626  108638 store.go:1342] Monitoring replicationcontrollers count at <storage-prefix>//controllers
I0919 11:25:54.719661  108638 rest.go:115] the default service ipfamily for this cluster is: IPv4
I0919 11:25:54.720167  108638 storage_factory.go:285] storing bindings in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.720364  108638 storage_factory.go:285] storing componentstatuses in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.720739  108638 reflector.go:153] Listing and watching *core.ReplicationController from storage/cacher.go:/controllers
I0919 11:25:54.721510  108638 watch_cache.go:405] Replace watchCache (rev: 30684) 
I0919 11:25:54.724378  108638 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.725992  108638 watch_cache.go:405] Replace watchCache (rev: 30684) 
I0919 11:25:54.727293  108638 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.728327  108638 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.729276  108638 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.729951  108638 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.730193  108638 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.730564  108638 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.731238  108638 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.732111  108638 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.732634  108638 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.733744  108638 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.734317  108638 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.735087  108638 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.735455  108638 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.736319  108638 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.736793  108638 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.737109  108638 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.737319  108638 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.737551  108638 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.737791  108638 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.738052  108638 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.739356  108638 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.739811  108638 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.740788  108638 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.742124  108638 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.742502  108638 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.742979  108638 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.743840  108638 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.744238  108638 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.745207  108638 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.746164  108638 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.747329  108638 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.748395  108638 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.748847  108638 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.749037  108638 master.go:450] Skipping disabled API group "auditregistration.k8s.io".
I0919 11:25:54.749123  108638 master.go:461] Enabling API group "authentication.k8s.io".
I0919 11:25:54.749246  108638 master.go:461] Enabling API group "authorization.k8s.io".
I0919 11:25:54.749502  108638 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.749736  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:25:54.749833  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:25:54.750699  108638 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0919 11:25:54.750863  108638 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.751023  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:25:54.751043  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:25:54.751140  108638 reflector.go:153] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0919 11:25:54.752529  108638 watch_cache.go:405] Replace watchCache (rev: 30684) 
I0919 11:25:54.752950  108638 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0919 11:25:54.753164  108638 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.753288  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:25:54.753306  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:25:54.753385  108638 reflector.go:153] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0919 11:25:54.754440  108638 watch_cache.go:405] Replace watchCache (rev: 30684) 
I0919 11:25:54.754870  108638 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0919 11:25:54.754886  108638 master.go:461] Enabling API group "autoscaling".
I0919 11:25:54.755065  108638 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.755181  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:25:54.755196  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:25:54.755272  108638 reflector.go:153] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0919 11:25:54.756583  108638 watch_cache.go:405] Replace watchCache (rev: 30684) 
I0919 11:25:54.758202  108638 store.go:1342] Monitoring jobs.batch count at <storage-prefix>//jobs
I0919 11:25:54.758440  108638 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.758681  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:25:54.758776  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:25:54.758947  108638 reflector.go:153] Listing and watching *batch.Job from storage/cacher.go:/jobs
I0919 11:25:54.760678  108638 watch_cache.go:405] Replace watchCache (rev: 30684) 
I0919 11:25:54.761685  108638 store.go:1342] Monitoring cronjobs.batch count at <storage-prefix>//cronjobs
I0919 11:25:54.761845  108638 master.go:461] Enabling API group "batch".
I0919 11:25:54.761811  108638 reflector.go:153] Listing and watching *batch.CronJob from storage/cacher.go:/cronjobs
I0919 11:25:54.767162  108638 watch_cache.go:405] Replace watchCache (rev: 30684) 
I0919 11:25:54.762041  108638 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.768328  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:25:54.768425  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:25:54.769242  108638 store.go:1342] Monitoring certificatesigningrequests.certificates.k8s.io count at <storage-prefix>//certificatesigningrequests
I0919 11:25:54.769267  108638 master.go:461] Enabling API group "certificates.k8s.io".
I0919 11:25:54.769465  108638 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.769617  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:25:54.769634  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:25:54.769735  108638 reflector.go:153] Listing and watching *certificates.CertificateSigningRequest from storage/cacher.go:/certificatesigningrequests
I0919 11:25:54.771338  108638 watch_cache.go:405] Replace watchCache (rev: 30684) 
I0919 11:25:54.771920  108638 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0919 11:25:54.772158  108638 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.772368  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:25:54.772446  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:25:54.772741  108638 reflector.go:153] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0919 11:25:54.774169  108638 watch_cache.go:405] Replace watchCache (rev: 30684) 
I0919 11:25:54.774542  108638 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0919 11:25:54.774557  108638 master.go:461] Enabling API group "coordination.k8s.io".
I0919 11:25:54.774581  108638 master.go:450] Skipping disabled API group "discovery.k8s.io".
I0919 11:25:54.774744  108638 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.774878  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:25:54.774898  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:25:54.774971  108638 reflector.go:153] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0919 11:25:54.776497  108638 watch_cache.go:405] Replace watchCache (rev: 30684) 
I0919 11:25:54.776854  108638 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0919 11:25:54.776873  108638 master.go:461] Enabling API group "extensions".
I0919 11:25:54.777035  108638 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.777159  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:25:54.777175  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:25:54.777284  108638 reflector.go:153] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0919 11:25:54.778392  108638 watch_cache.go:405] Replace watchCache (rev: 30684) 
I0919 11:25:54.779422  108638 reflector.go:153] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I0919 11:25:54.779944  108638 store.go:1342] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I0919 11:25:54.780061  108638 watch_cache.go:405] Replace watchCache (rev: 30684) 
I0919 11:25:54.780141  108638 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.780299  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:25:54.780317  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:25:54.781672  108638 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0919 11:25:54.781693  108638 master.go:461] Enabling API group "networking.k8s.io".
I0919 11:25:54.781722  108638 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.781760  108638 reflector.go:153] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0919 11:25:54.781854  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:25:54.781875  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:25:54.783042  108638 watch_cache.go:405] Replace watchCache (rev: 30684) 
I0919 11:25:54.783318  108638 store.go:1342] Monitoring runtimeclasses.node.k8s.io count at <storage-prefix>//runtimeclasses
I0919 11:25:54.783515  108638 master.go:461] Enabling API group "node.k8s.io".
I0919 11:25:54.783730  108638 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.783425  108638 reflector.go:153] Listing and watching *node.RuntimeClass from storage/cacher.go:/runtimeclasses
I0919 11:25:54.783848  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:25:54.783866  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:25:54.784796  108638 watch_cache.go:405] Replace watchCache (rev: 30684) 
I0919 11:25:54.785025  108638 store.go:1342] Monitoring poddisruptionbudgets.policy count at <storage-prefix>//poddisruptionbudgets
I0919 11:25:54.785111  108638 reflector.go:153] Listing and watching *policy.PodDisruptionBudget from storage/cacher.go:/poddisruptionbudgets
I0919 11:25:54.785225  108638 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.785335  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:25:54.785353  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:25:54.786423  108638 watch_cache.go:405] Replace watchCache (rev: 30684) 
I0919 11:25:54.786476  108638 store.go:1342] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicy
I0919 11:25:54.786496  108638 master.go:461] Enabling API group "policy".
I0919 11:25:54.786538  108638 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.786677  108638 reflector.go:153] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicy
I0919 11:25:54.786876  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:25:54.786902  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:25:54.787867  108638 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0919 11:25:54.787897  108638 watch_cache.go:405] Replace watchCache (rev: 30684) 
I0919 11:25:54.788095  108638 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.788206  108638 reflector.go:153] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0919 11:25:54.788222  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:25:54.788237  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:25:54.789420  108638 watch_cache.go:405] Replace watchCache (rev: 30684) 
I0919 11:25:54.789443  108638 reflector.go:153] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0919 11:25:54.789422  108638 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0919 11:25:54.789525  108638 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.789688  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:25:54.789703  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:25:54.790652  108638 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0919 11:25:54.790681  108638 reflector.go:153] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0919 11:25:54.790863  108638 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.791009  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:25:54.791039  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:25:54.791536  108638 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0919 11:25:54.791582  108638 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.791717  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:25:54.791733  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:25:54.791764  108638 reflector.go:153] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0919 11:25:54.792845  108638 watch_cache.go:405] Replace watchCache (rev: 30684) 
I0919 11:25:54.792913  108638 watch_cache.go:405] Replace watchCache (rev: 30684) 
I0919 11:25:54.793110  108638 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0919 11:25:54.793130  108638 watch_cache.go:405] Replace watchCache (rev: 30684) 
I0919 11:25:54.793319  108638 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.793425  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:25:54.793439  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:25:54.793551  108638 reflector.go:153] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0919 11:25:54.794816  108638 watch_cache.go:405] Replace watchCache (rev: 30684) 
I0919 11:25:54.795456  108638 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0919 11:25:54.795483  108638 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.795536  108638 reflector.go:153] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0919 11:25:54.795628  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:25:54.795660  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:25:54.796126  108638 watch_cache.go:405] Replace watchCache (rev: 30684) 
I0919 11:25:54.797029  108638 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0919 11:25:54.797196  108638 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.797320  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:25:54.797336  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:25:54.797403  108638 reflector.go:153] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0919 11:25:54.798660  108638 watch_cache.go:405] Replace watchCache (rev: 30684) 
I0919 11:25:54.798965  108638 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0919 11:25:54.798997  108638 master.go:461] Enabling API group "rbac.authorization.k8s.io".
I0919 11:25:54.799297  108638 reflector.go:153] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0919 11:25:54.800324  108638 watch_cache.go:405] Replace watchCache (rev: 30684) 
I0919 11:25:54.801302  108638 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.801608  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:25:54.801629  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:25:54.802341  108638 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0919 11:25:54.802537  108638 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.802716  108638 reflector.go:153] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0919 11:25:54.802915  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:25:54.802933  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:25:54.803635  108638 watch_cache.go:405] Replace watchCache (rev: 30684) 
I0919 11:25:54.804476  108638 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0919 11:25:54.804495  108638 master.go:461] Enabling API group "scheduling.k8s.io".
I0919 11:25:54.804565  108638 reflector.go:153] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0919 11:25:54.805267  108638 master.go:450] Skipping disabled API group "settings.k8s.io".
I0919 11:25:54.805318  108638 watch_cache.go:405] Replace watchCache (rev: 30684) 
I0919 11:25:54.805926  108638 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.806347  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:25:54.806370  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:25:54.807271  108638 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0919 11:25:54.807434  108638 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.807955  108638 reflector.go:153] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0919 11:25:54.809377  108638 watch_cache.go:405] Replace watchCache (rev: 30684) 
I0919 11:25:54.810068  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:25:54.810097  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:25:54.811025  108638 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0919 11:25:54.811065  108638 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.811182  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:25:54.811201  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:25:54.811272  108638 reflector.go:153] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0919 11:25:54.812794  108638 store.go:1342] Monitoring csinodes.storage.k8s.io count at <storage-prefix>//csinodes
I0919 11:25:54.812834  108638 watch_cache.go:405] Replace watchCache (rev: 30684) 
I0919 11:25:54.812829  108638 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.812922  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:25:54.812936  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:25:54.813023  108638 reflector.go:153] Listing and watching *storage.CSINode from storage/cacher.go:/csinodes
I0919 11:25:54.813714  108638 store.go:1342] Monitoring csidrivers.storage.k8s.io count at <storage-prefix>//csidrivers
I0919 11:25:54.813910  108638 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.814018  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:25:54.814031  108638 watch_cache.go:405] Replace watchCache (rev: 30684) 
I0919 11:25:54.814038  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:25:54.814115  108638 reflector.go:153] Listing and watching *storage.CSIDriver from storage/cacher.go:/csidrivers
I0919 11:25:54.814891  108638 watch_cache.go:405] Replace watchCache (rev: 30684) 
I0919 11:25:54.815204  108638 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0919 11:25:54.815236  108638 reflector.go:153] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0919 11:25:54.815363  108638 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.815459  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:25:54.815474  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:25:54.816781  108638 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0919 11:25:54.816816  108638 master.go:461] Enabling API group "storage.k8s.io".
I0919 11:25:54.816869  108638 watch_cache.go:405] Replace watchCache (rev: 30684) 
I0919 11:25:54.816986  108638 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.817089  108638 reflector.go:153] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0919 11:25:54.817188  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:25:54.817205  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:25:54.818893  108638 watch_cache.go:405] Replace watchCache (rev: 30684) 
I0919 11:25:54.819757  108638 store.go:1342] Monitoring deployments.apps count at <storage-prefix>//deployments
I0919 11:25:54.819805  108638 reflector.go:153] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0919 11:25:54.819923  108638 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.821030  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:25:54.821059  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:25:54.820711  108638 watch_cache.go:405] Replace watchCache (rev: 30684) 
I0919 11:25:54.822581  108638 store.go:1342] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0919 11:25:54.822618  108638 reflector.go:153] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0919 11:25:54.822794  108638 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.823348  108638 watch_cache.go:405] Replace watchCache (rev: 30684) 
I0919 11:25:54.823540  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:25:54.823562  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:25:54.824441  108638 store.go:1342] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0919 11:25:54.824609  108638 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.824743  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:25:54.824769  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:25:54.824841  108638 reflector.go:153] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0919 11:25:54.826106  108638 store.go:1342] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0919 11:25:54.826132  108638 watch_cache.go:405] Replace watchCache (rev: 30684) 
I0919 11:25:54.826260  108638 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.826371  108638 reflector.go:153] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0919 11:25:54.826386  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:25:54.826404  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:25:54.827452  108638 watch_cache.go:405] Replace watchCache (rev: 30684) 
I0919 11:25:54.827562  108638 store.go:1342] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0919 11:25:54.827578  108638 master.go:461] Enabling API group "apps".
I0919 11:25:54.827622  108638 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.827796  108638 reflector.go:153] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0919 11:25:54.828007  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:25:54.828029  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:25:54.829092  108638 watch_cache.go:405] Replace watchCache (rev: 30684) 
I0919 11:25:54.829228  108638 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0919 11:25:54.829260  108638 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.829353  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:25:54.829369  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:25:54.829395  108638 reflector.go:153] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0919 11:25:54.829944  108638 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0919 11:25:54.829971  108638 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.830063  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:25:54.830086  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:25:54.830154  108638 reflector.go:153] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0919 11:25:54.831016  108638 watch_cache.go:405] Replace watchCache (rev: 30684) 
I0919 11:25:54.831320  108638 watch_cache.go:405] Replace watchCache (rev: 30684) 
I0919 11:25:54.831352  108638 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0919 11:25:54.831378  108638 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.831468  108638 reflector.go:153] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0919 11:25:54.831478  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:25:54.831500  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:25:54.832589  108638 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0919 11:25:54.832610  108638 master.go:461] Enabling API group "admissionregistration.k8s.io".
I0919 11:25:54.832660  108638 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.832710  108638 watch_cache.go:405] Replace watchCache (rev: 30684) 
I0919 11:25:54.832953  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:25:54.832970  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:25:54.832976  108638 reflector.go:153] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0919 11:25:54.834015  108638 watch_cache.go:405] Replace watchCache (rev: 30684) 
I0919 11:25:54.834037  108638 reflector.go:153] Listing and watching *core.Event from storage/cacher.go:/events
I0919 11:25:54.834017  108638 store.go:1342] Monitoring events count at <storage-prefix>//events
I0919 11:25:54.834057  108638 master.go:461] Enabling API group "events.k8s.io".
I0919 11:25:54.834280  108638 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.834519  108638 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.834822  108638 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.834930  108638 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.835057  108638 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.835155  108638 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.835367  108638 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.835453  108638 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.835542  108638 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.835626  108638 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.836656  108638 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.836993  108638 watch_cache.go:405] Replace watchCache (rev: 30684) 
I0919 11:25:54.837091  108638 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.837975  108638 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.838232  108638 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.839136  108638 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.839406  108638 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.840236  108638 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.840503  108638 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.841578  108638 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.842040  108638 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0919 11:25:54.842188  108638 genericapiserver.go:404] Skipping API batch/v2alpha1 because it has no resources.
I0919 11:25:54.842982  108638 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.843214  108638 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.843532  108638 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.844512  108638 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.845408  108638 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.846374  108638 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.848275  108638 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.849313  108638 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.853164  108638 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.853564  108638 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.854397  108638 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0919 11:25:54.854539  108638 genericapiserver.go:404] Skipping API node.k8s.io/v1alpha1 because it has no resources.
I0919 11:25:54.855482  108638 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.856798  108638 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.865435  108638 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.866361  108638 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.866969  108638 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.867837  108638 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.868567  108638 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.869252  108638 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.869928  108638 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.870739  108638 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.871499  108638 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0919 11:25:54.871633  108638 genericapiserver.go:404] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
I0919 11:25:54.872374  108638 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.873065  108638 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0919 11:25:54.873234  108638 genericapiserver.go:404] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
I0919 11:25:54.873900  108638 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.876465  108638 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.876933  108638 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.877804  108638 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.878528  108638 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.879363  108638 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.880224  108638 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0919 11:25:54.880400  108638 genericapiserver.go:404] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
I0919 11:25:54.881499  108638 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.882576  108638 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.883050  108638 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.884124  108638 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.884691  108638 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.885056  108638 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.886049  108638 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.886528  108638 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.886932  108638 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.888106  108638 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.888461  108638 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.888903  108638 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0919 11:25:54.889048  108638 genericapiserver.go:404] Skipping API apps/v1beta2 because it has no resources.
W0919 11:25:54.889118  108638 genericapiserver.go:404] Skipping API apps/v1beta1 because it has no resources.
I0919 11:25:54.890182  108638 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.891011  108638 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.891904  108638 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.892900  108638 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.893978  108638 storage_factory.go:285] storing events.events.k8s.io in events.k8s.io/v1beta1, reading as events.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"bb8d8f57-b00c-4cdc-a3ca-3c645c627180", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:25:54.899469  108638 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 11:25:54.899500  108638 healthz.go:177] healthz check poststarthook/bootstrap-controller failed: not finished
I0919 11:25:54.899511  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:25:54.899524  108638 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 11:25:54.899534  108638 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 11:25:54.899541  108638 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[-]poststarthook/bootstrap-controller failed: reason withheld
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 11:25:54.899583  108638 httplog.go:90] GET /healthz: (401.372µs) 0 [Go-http-client/1.1 127.0.0.1:36618]
I0919 11:25:54.900576  108638 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.669106ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36620]
I0919 11:25:54.903495  108638 httplog.go:90] GET /api/v1/services: (1.142842ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36620]
I0919 11:25:54.907266  108638 httplog.go:90] GET /api/v1/services: (940.887µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36620]
I0919 11:25:54.909392  108638 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 11:25:54.909418  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:25:54.909430  108638 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 11:25:54.909440  108638 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 11:25:54.909448  108638 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 11:25:54.909480  108638 httplog.go:90] GET /healthz: (198.618µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:54.911097  108638 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.044836ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36620]
I0919 11:25:54.912627  108638 httplog.go:90] GET /api/v1/services: (1.196528ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36622]
I0919 11:25:54.912797  108638 httplog.go:90] GET /api/v1/services: (1.346076ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:54.913240  108638 httplog.go:90] POST /api/v1/namespaces: (1.33987ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36620]
I0919 11:25:54.915237  108638 httplog.go:90] GET /api/v1/namespaces/kube-public: (937.527µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36622]
I0919 11:25:54.917319  108638 httplog.go:90] POST /api/v1/namespaces: (1.787126ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36622]
I0919 11:25:54.918529  108638 httplog.go:90] GET /api/v1/namespaces/kube-node-lease: (831.237µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36622]
I0919 11:25:54.920210  108638 httplog.go:90] POST /api/v1/namespaces: (1.343323ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36622]
I0919 11:25:55.001473  108638 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 11:25:55.001504  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:25:55.001517  108638 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 11:25:55.001526  108638 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 11:25:55.001534  108638 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 11:25:55.001560  108638 httplog.go:90] GET /healthz: (241.506µs) 0 [Go-http-client/1.1 127.0.0.1:36622]
I0919 11:25:55.010096  108638 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 11:25:55.010129  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:25:55.010142  108638 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 11:25:55.010151  108638 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 11:25:55.010159  108638 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 11:25:55.010194  108638 httplog.go:90] GET /healthz: (256.353µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36622]
I0919 11:25:55.101523  108638 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 11:25:55.101555  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:25:55.101573  108638 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 11:25:55.101593  108638 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 11:25:55.101602  108638 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 11:25:55.101656  108638 httplog.go:90] GET /healthz: (261.065µs) 0 [Go-http-client/1.1 127.0.0.1:36622]
I0919 11:25:55.110087  108638 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 11:25:55.110116  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:25:55.110129  108638 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 11:25:55.110139  108638 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 11:25:55.110147  108638 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 11:25:55.110173  108638 httplog.go:90] GET /healthz: (229.628µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36622]
I0919 11:25:55.205993  108638 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 11:25:55.206036  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:25:55.206050  108638 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 11:25:55.206061  108638 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 11:25:55.206071  108638 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 11:25:55.206132  108638 httplog.go:90] GET /healthz: (4.721144ms) 0 [Go-http-client/1.1 127.0.0.1:36622]
I0919 11:25:55.212080  108638 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 11:25:55.212121  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:25:55.212134  108638 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 11:25:55.212143  108638 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 11:25:55.212151  108638 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 11:25:55.212184  108638 httplog.go:90] GET /healthz: (232.85µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36622]
I0919 11:25:55.301461  108638 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 11:25:55.301498  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:25:55.301508  108638 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 11:25:55.301516  108638 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 11:25:55.301524  108638 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 11:25:55.301567  108638 httplog.go:90] GET /healthz: (254.54µs) 0 [Go-http-client/1.1 127.0.0.1:36622]
I0919 11:25:55.310127  108638 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 11:25:55.310158  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:25:55.310170  108638 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 11:25:55.310179  108638 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 11:25:55.310188  108638 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 11:25:55.310213  108638 httplog.go:90] GET /healthz: (223.238µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36622]
I0919 11:25:55.401460  108638 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 11:25:55.401527  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:25:55.401540  108638 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 11:25:55.401549  108638 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 11:25:55.401557  108638 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 11:25:55.401599  108638 httplog.go:90] GET /healthz: (293.7µs) 0 [Go-http-client/1.1 127.0.0.1:36622]
I0919 11:25:55.410138  108638 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 11:25:55.410176  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:25:55.410187  108638 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 11:25:55.410196  108638 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 11:25:55.410204  108638 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 11:25:55.410230  108638 httplog.go:90] GET /healthz: (230.457µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36622]
I0919 11:25:55.501480  108638 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 11:25:55.501508  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:25:55.501520  108638 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 11:25:55.501534  108638 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 11:25:55.501541  108638 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 11:25:55.501572  108638 httplog.go:90] GET /healthz: (235.206µs) 0 [Go-http-client/1.1 127.0.0.1:36622]
I0919 11:25:55.510089  108638 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 11:25:55.510120  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:25:55.510131  108638 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 11:25:55.510139  108638 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 11:25:55.510146  108638 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 11:25:55.510180  108638 httplog.go:90] GET /healthz: (228.645µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36622]
I0919 11:25:55.601764  108638 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 11:25:55.601798  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:25:55.601809  108638 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 11:25:55.601819  108638 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 11:25:55.601827  108638 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 11:25:55.601889  108638 httplog.go:90] GET /healthz: (273.065µs) 0 [Go-http-client/1.1 127.0.0.1:36622]
I0919 11:25:55.610211  108638 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 11:25:55.610242  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:25:55.610265  108638 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 11:25:55.610275  108638 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 11:25:55.610283  108638 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 11:25:55.610342  108638 httplog.go:90] GET /healthz: (294.844µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36622]
I0919 11:25:55.666228  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:25:55.666324  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:25:55.703431  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:25:55.703461  108638 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 11:25:55.703475  108638 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 11:25:55.703484  108638 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 11:25:55.703535  108638 httplog.go:90] GET /healthz: (1.579634ms) 0 [Go-http-client/1.1 127.0.0.1:36622]
I0919 11:25:55.710857  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:25:55.710882  108638 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 11:25:55.710892  108638 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 11:25:55.710900  108638 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 11:25:55.710947  108638 httplog.go:90] GET /healthz: (1.040915ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36622]
I0919 11:25:55.802278  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:25:55.802311  108638 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 11:25:55.802321  108638 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 11:25:55.802333  108638 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 11:25:55.802372  108638 httplog.go:90] GET /healthz: (1.049567ms) 0 [Go-http-client/1.1 127.0.0.1:36622]
I0919 11:25:55.813971  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:25:55.813994  108638 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 11:25:55.814004  108638 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 11:25:55.814012  108638 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 11:25:55.814050  108638 httplog.go:90] GET /healthz: (2.798303ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36622]
I0919 11:25:55.900613  108638 httplog.go:90] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-node-critical: (1.659922ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36622]
I0919 11:25:55.900663  108638 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.693971ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:55.902759  108638 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: (1.641982ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36622]
I0919 11:25:55.902941  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.056108ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36768]
I0919 11:25:55.903928  108638 httplog.go:90] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (2.721701ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:55.904871  108638 storage_scheduling.go:139] created PriorityClass system-node-critical with value 2000001000
I0919 11:25:55.906182  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:25:55.906207  108638 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 11:25:55.906217  108638 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 11:25:55.906225  108638 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 11:25:55.906240  108638 httplog.go:90] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-cluster-critical: (1.194545ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:55.906253  108638 httplog.go:90] GET /healthz: (3.227229ms) 0 [Go-http-client/1.1 127.0.0.1:36770]
I0919 11:25:55.906482  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.95164ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36768]
I0919 11:25:55.906572  108638 httplog.go:90] POST /api/v1/namespaces/kube-system/configmaps: (2.063196ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36622]
I0919 11:25:55.907944  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (1.0418ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36768]
I0919 11:25:55.909217  108638 httplog.go:90] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (2.072733ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:55.909217  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (895.756µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36622]
I0919 11:25:55.909358  108638 storage_scheduling.go:139] created PriorityClass system-cluster-critical with value 2000000000
I0919 11:25:55.909372  108638 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
I0919 11:25:55.910555  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:25:55.910577  108638 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:25:55.910607  108638 httplog.go:90] GET /healthz: (797.254µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36770]
I0919 11:25:55.910705  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (903.407µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:55.911768  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (666.664µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:55.912802  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (656.166µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:55.913830  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (703.945µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:55.914913  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (726.942µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:55.915883  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-admin: (726.751µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:55.917715  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.51055ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:55.917953  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/cluster-admin
I0919 11:25:55.919035  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:discovery: (950.696µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:55.921035  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.438702ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:55.921236  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:discovery
I0919 11:25:55.922341  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:basic-user: (792.78µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:55.924167  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.312385ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:55.924327  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:basic-user
I0919 11:25:55.925303  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:public-info-viewer: (751.752µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:55.927133  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.387586ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:55.927338  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:public-info-viewer
I0919 11:25:55.928460  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (847.272µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:55.930346  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.516758ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:55.930563  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/admin
I0919 11:25:55.931485  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (696.164µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:55.933169  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.315597ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:55.933387  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/edit
I0919 11:25:55.934325  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (728.314µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:55.936317  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.392996ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:55.936631  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/view
I0919 11:25:55.937670  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (821.711µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:55.939603  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.515025ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:55.939862  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin
I0919 11:25:55.940989  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (864.972µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:55.943129  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.636941ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:55.943482  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit
I0919 11:25:55.944624  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (947.026µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:55.946634  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.676173ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:55.947067  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view
I0919 11:25:55.948236  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:heapster: (1.044162ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:55.949855  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.302523ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:55.949987  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:heapster
I0919 11:25:55.950813  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node: (653.338µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:55.952679  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.342653ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:55.952976  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node
I0919 11:25:55.953910  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-problem-detector: (739.601µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:55.955542  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.176999ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:55.955723  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector
I0919 11:25:55.956577  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kubelet-api-admin: (691.79µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:55.958036  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.063771ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:55.958282  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kubelet-api-admin
I0919 11:25:55.959270  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-bootstrapper: (793.667µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:55.960913  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.276216ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:55.961220  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-bootstrapper
I0919 11:25:55.962188  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:auth-delegator: (698.69µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:55.964246  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.742236ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:55.964474  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:auth-delegator
I0919 11:25:55.965256  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-aggregator: (560.18µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:55.966719  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.150345ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:55.967083  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator
I0919 11:25:55.967987  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-controller-manager: (599.523µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:55.969832  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.526278ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:55.970123  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager
I0919 11:25:55.971051  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-dns: (808.332µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:55.972480  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.088716ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:55.972704  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-dns
I0919 11:25:55.973697  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:persistent-volume-provisioner: (853.189µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:55.975418  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.282224ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:55.976386  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner
I0919 11:25:55.977354  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-attacher: (743.178µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:55.979019  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.305099ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:55.979251  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:csi-external-attacher
I0919 11:25:55.980215  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:nodeclient: (792.832µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:55.982045  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.462447ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:55.982390  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:nodeclient
I0919 11:25:55.983671  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient: (852.724µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:55.985636  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.277736ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:55.985907  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
I0919 11:25:55.986815  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:volume-scheduler: (681.604µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:55.988625  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.21261ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:55.988790  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:volume-scheduler
I0919 11:25:55.989638  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-proxier: (709.513µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:55.991287  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.313056ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:55.991446  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-proxier
I0919 11:25:55.992293  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-scheduler: (683.332µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:55.993906  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.231927ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:55.994119  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler
I0919 11:25:55.995000  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-provisioner: (632.7µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:55.996847  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.52148ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:55.997160  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:csi-external-provisioner
I0919 11:25:55.997890  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:attachdetach-controller: (609.061µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:55.999370  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.180547ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:55.999636  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0919 11:25:56.000722  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:clusterrole-aggregation-controller: (808.97µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:56.002422  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:25:56.002453  108638 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:25:56.002483  108638 httplog.go:90] GET /healthz: (1.305552ms) 0 [Go-http-client/1.1 127.0.0.1:36770]
I0919 11:25:56.002712  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.628411ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:56.002923  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0919 11:25:56.003829  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:cronjob-controller: (712.978µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:56.005583  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.344946ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:56.005805  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0919 11:25:56.006806  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:daemon-set-controller: (823.698µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:56.008522  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.403388ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:56.008782  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0919 11:25:56.009661  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:deployment-controller: (710.284µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:56.010588  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:25:56.010610  108638 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:25:56.010675  108638 httplog.go:90] GET /healthz: (756.903µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:56.011475  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.423211ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36770]
I0919 11:25:56.011777  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller
I0919 11:25:56.012745  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:disruption-controller: (741.131µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36770]
I0919 11:25:56.014571  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.415837ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36770]
I0919 11:25:56.014822  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller
I0919 11:25:56.015858  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:endpoint-controller: (825.113µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36770]
I0919 11:25:56.017908  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.664032ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36770]
I0919 11:25:56.018128  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0919 11:25:56.019241  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:expand-controller: (866.804µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36770]
I0919 11:25:56.021009  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.370125ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36770]
I0919 11:25:56.021180  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:expand-controller
I0919 11:25:56.022119  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:generic-garbage-collector: (755.118µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36770]
I0919 11:25:56.023987  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.569352ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36770]
I0919 11:25:56.024383  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0919 11:25:56.025322  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:horizontal-pod-autoscaler: (752.948µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36770]
I0919 11:25:56.027426  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.77067ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36770]
I0919 11:25:56.027768  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0919 11:25:56.028962  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:job-controller: (895.173µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36770]
I0919 11:25:56.030983  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.443769ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36770]
I0919 11:25:56.031390  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller
I0919 11:25:56.032625  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:namespace-controller: (1.00172ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36770]
I0919 11:25:56.034528  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.341119ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36770]
I0919 11:25:56.034731  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:namespace-controller
I0919 11:25:56.035753  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:node-controller: (832.097µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36770]
I0919 11:25:56.037735  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.328527ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36770]
I0919 11:25:56.038030  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:node-controller
I0919 11:25:56.039154  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:persistent-volume-binder: (969.334µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36770]
I0919 11:25:56.041322  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.700731ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36770]
I0919 11:25:56.041721  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0919 11:25:56.042721  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pod-garbage-collector: (834.522µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36770]
I0919 11:25:56.044447  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.270703ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36770]
I0919 11:25:56.044618  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0919 11:25:56.045708  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replicaset-controller: (829.286µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36770]
I0919 11:25:56.047511  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.479039ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36770]
I0919 11:25:56.047849  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0919 11:25:56.049041  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replication-controller: (1.04824ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36770]
I0919 11:25:56.050885  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.395871ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36770]
I0919 11:25:56.051074  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller
I0919 11:25:56.052124  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:resourcequota-controller: (858.967µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36770]
I0919 11:25:56.054409  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.683138ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36770]
I0919 11:25:56.054753  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0919 11:25:56.055788  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:route-controller: (853.108µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36770]
I0919 11:25:56.057810  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.463442ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36770]
I0919 11:25:56.058167  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:route-controller
I0919 11:25:56.059223  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-account-controller: (901.713µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36770]
I0919 11:25:56.061311  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.505539ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36770]
I0919 11:25:56.061506  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-account-controller
I0919 11:25:56.062726  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-controller: (1.094704ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36770]
I0919 11:25:56.064528  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.516942ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36770]
I0919 11:25:56.064729  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-controller
I0919 11:25:56.065760  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:statefulset-controller: (916.522µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36770]
I0919 11:25:56.068137  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.497313ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36770]
I0919 11:25:56.068903  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0919 11:25:56.070424  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:ttl-controller: (1.194184ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36770]
I0919 11:25:56.081448  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.910081ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36770]
I0919 11:25:56.081769  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-controller
I0919 11:25:56.100825  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:certificate-controller: (1.282149ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36770]
I0919 11:25:56.102116  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:25:56.102147  108638 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:25:56.102188  108638 httplog.go:90] GET /healthz: (921.781µs) 0 [Go-http-client/1.1 127.0.0.1:36770]
I0919 11:25:56.110869  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:25:56.110898  108638 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:25:56.110932  108638 httplog.go:90] GET /healthz: (1.030696ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36770]
I0919 11:25:56.121478  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.936578ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36770]
I0919 11:25:56.121789  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller
I0919 11:25:56.141189  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pvc-protection-controller: (1.586845ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36770]
I0919 11:25:56.162053  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.50953ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36770]
I0919 11:25:56.162494  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0919 11:25:56.181148  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pv-protection-controller: (1.448748ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36770]
I0919 11:25:56.201868  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.317674ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36770]
I0919 11:25:56.202145  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0919 11:25:56.202875  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:25:56.202898  108638 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:25:56.202946  108638 httplog.go:90] GET /healthz: (1.691779ms) 0 [Go-http-client/1.1 127.0.0.1:36618]
I0919 11:25:56.211311  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:25:56.211342  108638 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:25:56.211438  108638 httplog.go:90] GET /healthz: (1.471829ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:56.220995  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin: (1.443482ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:56.241710  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.166687ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:56.242792  108638 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin
I0919 11:25:56.260946  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (1.335559ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:56.281866  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.170963ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:56.282252  108638 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery
I0919 11:25:56.300769  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:basic-user: (1.223984ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:56.302204  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:25:56.302231  108638 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:25:56.302264  108638 httplog.go:90] GET /healthz: (1.032186ms) 0 [Go-http-client/1.1 127.0.0.1:36618]
I0919 11:25:56.310732  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:25:56.310758  108638 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:25:56.310788  108638 httplog.go:90] GET /healthz: (859.001µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:56.321784  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.308299ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:56.322238  108638 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user
I0919 11:25:56.340892  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:public-info-viewer: (1.300596ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:56.362464  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.870028ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:56.362977  108638 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:public-info-viewer
I0919 11:25:56.380859  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node-proxier: (1.30685ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:56.403536  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (4.016692ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:56.403714  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:25:56.403740  108638 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:25:56.403773  108638 httplog.go:90] GET /healthz: (2.534032ms) 0 [Go-http-client/1.1 127.0.0.1:36770]
I0919 11:25:56.404083  108638 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier
I0919 11:25:56.410908  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:25:56.410932  108638 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:25:56.410973  108638 httplog.go:90] GET /healthz: (1.06285ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:56.420869  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-controller-manager: (1.344579ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:56.442088  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.452873ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:56.442362  108638 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager
I0919 11:25:56.462180  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-dns: (2.432945ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:56.481881  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.310812ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:56.482153  108638 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns
I0919 11:25:56.501303  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-scheduler: (1.75267ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:56.502823  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:25:56.502848  108638 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:25:56.502890  108638 httplog.go:90] GET /healthz: (1.117815ms) 0 [Go-http-client/1.1 127.0.0.1:36618]
I0919 11:25:56.510721  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:25:56.510861  108638 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:25:56.511139  108638 httplog.go:90] GET /healthz: (1.219448ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:56.521355  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.748728ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:56.521714  108638 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler
I0919 11:25:56.540900  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:volume-scheduler: (1.297151ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:56.561768  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.187716ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:56.562019  108638 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:volume-scheduler
I0919 11:25:56.581250  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node: (1.672051ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:56.603308  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.684828ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:56.603556  108638 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node
I0919 11:25:56.603697  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:25:56.603714  108638 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:25:56.603758  108638 httplog.go:90] GET /healthz: (2.478229ms) 0 [Go-http-client/1.1 127.0.0.1:36770]
I0919 11:25:56.610712  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:25:56.610740  108638 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:25:56.610774  108638 httplog.go:90] GET /healthz: (883.264µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36770]
I0919 11:25:56.620750  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:attachdetach-controller: (1.171517ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36770]
I0919 11:25:56.641866  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.118573ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36770]
I0919 11:25:56.642123  108638 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0919 11:25:56.660869  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:clusterrole-aggregation-controller: (1.311846ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36770]
I0919 11:25:56.682009  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.495402ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36770]
I0919 11:25:56.682253  108638 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0919 11:25:56.700635  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:cronjob-controller: (1.067343ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36770]
I0919 11:25:56.702838  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:25:56.702863  108638 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:25:56.702894  108638 httplog.go:90] GET /healthz: (1.701387ms) 0 [Go-http-client/1.1 127.0.0.1:36770]
I0919 11:25:56.710620  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:25:56.710669  108638 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:25:56.710702  108638 httplog.go:90] GET /healthz: (777.429µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36770]
I0919 11:25:56.721635  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.146409ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36770]
I0919 11:25:56.722018  108638 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0919 11:25:56.740968  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:daemon-set-controller: (1.346898ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36770]
I0919 11:25:56.761658  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.048559ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36770]
I0919 11:25:56.761890  108638 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0919 11:25:56.781066  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:deployment-controller: (1.361146ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36770]
I0919 11:25:56.801363  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.795559ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36770]
I0919 11:25:56.801610  108638 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I0919 11:25:56.802284  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:25:56.802305  108638 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:25:56.802343  108638 httplog.go:90] GET /healthz: (1.006105ms) 0 [Go-http-client/1.1 127.0.0.1:36618]
I0919 11:25:56.811086  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:25:56.811117  108638 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:25:56.811156  108638 httplog.go:90] GET /healthz: (1.211212ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:56.820399  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:disruption-controller: (940.541µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:56.841946  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.273671ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:56.842174  108638 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I0919 11:25:56.860708  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:endpoint-controller: (1.109096ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:56.881902  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.24854ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:56.882132  108638 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0919 11:25:56.900816  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:expand-controller: (1.338941ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:56.901926  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:25:56.901953  108638 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:25:56.901983  108638 httplog.go:90] GET /healthz: (805.707µs) 0 [Go-http-client/1.1 127.0.0.1:36618]
I0919 11:25:56.910834  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:25:56.910886  108638 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:25:56.910935  108638 httplog.go:90] GET /healthz: (830.726µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:56.926554  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (5.516341ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:56.926892  108638 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
I0919 11:25:56.941010  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:generic-garbage-collector: (1.086029ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:56.961520  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.969328ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:56.961972  108638 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0919 11:25:56.980745  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:horizontal-pod-autoscaler: (1.038806ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:57.001676  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.051681ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:57.001912  108638 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0919 11:25:57.002678  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:25:57.002703  108638 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:25:57.002745  108638 httplog.go:90] GET /healthz: (1.328419ms) 0 [Go-http-client/1.1 127.0.0.1:36770]
I0919 11:25:57.010913  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:25:57.011141  108638 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:25:57.011176  108638 httplog.go:90] GET /healthz: (1.14378ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36770]
I0919 11:25:57.020407  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:job-controller: (942.793µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36770]
I0919 11:25:57.041927  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.261202ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36770]
I0919 11:25:57.042349  108638 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I0919 11:25:57.060910  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:namespace-controller: (1.334524ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36770]
I0919 11:25:57.081721  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.12909ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36770]
I0919 11:25:57.082228  108638 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I0919 11:25:57.100719  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:node-controller: (1.13304ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36770]
I0919 11:25:57.102168  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:25:57.102367  108638 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:25:57.102593  108638 httplog.go:90] GET /healthz: (1.291863ms) 0 [Go-http-client/1.1 127.0.0.1:36770]
I0919 11:25:57.110801  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:25:57.110827  108638 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:25:57.110859  108638 httplog.go:90] GET /healthz: (861.575µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36770]
I0919 11:25:57.121746  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.223104ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36770]
I0919 11:25:57.121948  108638 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I0919 11:25:57.140921  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:persistent-volume-binder: (1.342502ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36770]
I0919 11:25:57.161773  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.083052ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36770]
I0919 11:25:57.162107  108638 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0919 11:25:57.181252  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pod-garbage-collector: (1.552779ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36770]
I0919 11:25:57.201605  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.02307ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36770]
I0919 11:25:57.201897  108638 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0919 11:25:57.202099  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:25:57.202124  108638 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:25:57.202167  108638 httplog.go:90] GET /healthz: (825.936µs) 0 [Go-http-client/1.1 127.0.0.1:36618]
I0919 11:25:57.212704  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:25:57.212740  108638 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:25:57.212832  108638 httplog.go:90] GET /healthz: (1.009745ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:57.220850  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replicaset-controller: (1.321465ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:57.242330  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.752961ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:57.242891  108638 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0919 11:25:57.260779  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replication-controller: (1.241934ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:57.281875  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.261002ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:57.282179  108638 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I0919 11:25:57.300718  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:resourcequota-controller: (1.125194ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:57.302078  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:25:57.302102  108638 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:25:57.302129  108638 httplog.go:90] GET /healthz: (863.77µs) 0 [Go-http-client/1.1 127.0.0.1:36618]
I0919 11:25:57.310850  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:25:57.310884  108638 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:25:57.310925  108638 httplog.go:90] GET /healthz: (976.535µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:57.321919  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.122936ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:57.322188  108638 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0919 11:25:57.340612  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:route-controller: (1.0832ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:57.361617  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.992397ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:57.361896  108638 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I0919 11:25:57.381047  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-account-controller: (1.457118ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:57.401704  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.153482ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:57.402132  108638 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I0919 11:25:57.402204  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:25:57.402223  108638 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:25:57.402414  108638 httplog.go:90] GET /healthz: (1.142427ms) 0 [Go-http-client/1.1 127.0.0.1:36770]
I0919 11:25:57.410786  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:25:57.410818  108638 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:25:57.410853  108638 httplog.go:90] GET /healthz: (920.856µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36770]
I0919 11:25:57.420755  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-controller: (1.188409ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36770]
I0919 11:25:57.441517  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.941071ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36770]
I0919 11:25:57.441836  108638 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I0919 11:25:57.460978  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:statefulset-controller: (1.355175ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36770]
I0919 11:25:57.481666  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.079473ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36770]
I0919 11:25:57.482001  108638 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0919 11:25:57.501047  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:ttl-controller: (1.337252ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36770]
I0919 11:25:57.502248  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:25:57.502274  108638 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:25:57.502323  108638 httplog.go:90] GET /healthz: (1.077726ms) 0 [Go-http-client/1.1 127.0.0.1:36618]
I0919 11:25:57.515775  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:25:57.515806  108638 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:25:57.516489  108638 httplog.go:90] GET /healthz: (2.169464ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:57.521665  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.084278ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:57.521900  108638 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I0919 11:25:57.541047  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:certificate-controller: (1.452709ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:57.561847  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.230943ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:57.562215  108638 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I0919 11:25:57.580950  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pvc-protection-controller: (1.359926ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:57.601691  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.134724ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:57.601916  108638 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0919 11:25:57.602199  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:25:57.602223  108638 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:25:57.602256  108638 httplog.go:90] GET /healthz: (940.503µs) 0 [Go-http-client/1.1 127.0.0.1:36770]
I0919 11:25:57.610751  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:25:57.610785  108638 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:25:57.610823  108638 httplog.go:90] GET /healthz: (868.765µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36770]
I0919 11:25:57.620702  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pv-protection-controller: (1.158923ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36770]
I0919 11:25:57.648191  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (6.03058ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36770]
I0919 11:25:57.648839  108638 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0919 11:25:57.660939  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/extension-apiserver-authentication-reader: (1.326031ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36770]
I0919 11:25:57.662812  108638 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.327839ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36770]
I0919 11:25:57.682016  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.359635ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36770]
I0919 11:25:57.682926  108638 storage_rbac.go:278] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
I0919 11:25:57.700895  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:bootstrap-signer: (1.271986ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36770]
I0919 11:25:57.704106  108638 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.804487ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36770]
I0919 11:25:57.704309  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:25:57.704330  108638 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:25:57.704364  108638 httplog.go:90] GET /healthz: (2.464159ms) 0 [Go-http-client/1.1 127.0.0.1:36618]
I0919 11:25:57.710864  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:25:57.710893  108638 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:25:57.710948  108638 httplog.go:90] GET /healthz: (914.308µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:57.721861  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.239807ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:57.722149  108638 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0919 11:25:57.740887  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:cloud-provider: (1.338401ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:57.742840  108638 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.486567ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:57.761844  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.255304ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:57.762121  108638 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0919 11:25:57.781273  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:token-cleaner: (1.615408ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:57.783228  108638 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.365514ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:57.801968  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.342688ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:57.802180  108638 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0919 11:25:57.803949  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:25:57.803976  108638 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:25:57.804029  108638 httplog.go:90] GET /healthz: (2.313489ms) 0 [Go-http-client/1.1 127.0.0.1:36770]
I0919 11:25:57.810814  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:25:57.810840  108638 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:25:57.810874  108638 httplog.go:90] GET /healthz: (917.998µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36770]
I0919 11:25:57.820959  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-controller-manager: (1.34317ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36770]
I0919 11:25:57.822977  108638 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.386693ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36770]
I0919 11:25:57.841934  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.300147ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36770]
I0919 11:25:57.842219  108638 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0919 11:25:57.861359  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-scheduler: (1.750483ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36770]
I0919 11:25:57.866324  108638 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.741969ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36770]
I0919 11:25:57.881798  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.204906ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36770]
I0919 11:25:57.882100  108638 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0919 11:25:57.900623  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles/system:controller:bootstrap-signer: (1.09803ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36770]
I0919 11:25:57.903290  108638 httplog.go:90] GET /api/v1/namespaces/kube-public: (2.238298ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36770]
I0919 11:25:57.903900  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:25:57.903918  108638 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:25:57.903949  108638 httplog.go:90] GET /healthz: (2.732647ms) 0 [Go-http-client/1.1 127.0.0.1:36618]
I0919 11:25:57.910590  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:25:57.910751  108638 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:25:57.910966  108638 httplog.go:90] GET /healthz: (1.047536ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:57.923271  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles: (2.231317ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:57.923727  108638 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0919 11:25:57.941006  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::extension-apiserver-authentication-reader: (1.434146ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:57.943171  108638 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.447061ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:57.961967  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.33033ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:57.962511  108638 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::extension-apiserver-authentication-reader in kube-system
I0919 11:25:57.980893  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-controller-manager: (1.342997ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:57.984279  108638 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.729288ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:58.004535  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (4.855221ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:58.005015  108638 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0919 11:25:58.005378  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:25:58.005531  108638 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:25:58.005761  108638 httplog.go:90] GET /healthz: (4.383123ms) 0 [Go-http-client/1.1 127.0.0.1:36770]
I0919 11:25:58.010877  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:25:58.010908  108638 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:25:58.011149  108638 httplog.go:90] GET /healthz: (1.124859ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36770]
I0919 11:25:58.021357  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-scheduler: (1.643185ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36770]
I0919 11:25:58.023761  108638 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.512291ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36770]
I0919 11:25:58.042004  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.438579ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36770]
I0919 11:25:58.042543  108638 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0919 11:25:58.061166  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:bootstrap-signer: (1.515432ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36770]
I0919 11:25:58.065981  108638 httplog.go:90] GET /api/v1/namespaces/kube-system: (4.165278ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36770]
I0919 11:25:58.081703  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.108663ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36770]
I0919 11:25:58.082231  108638 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0919 11:25:58.101592  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:cloud-provider: (1.922606ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36770]
I0919 11:25:58.102452  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:25:58.102477  108638 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:25:58.102517  108638 httplog.go:90] GET /healthz: (1.195678ms) 0 [Go-http-client/1.1 127.0.0.1:36618]
I0919 11:25:58.104256  108638 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.399254ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36770]
I0919 11:25:58.110973  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:25:58.111130  108638 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:25:58.111299  108638 httplog.go:90] GET /healthz: (1.338432ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36770]
I0919 11:25:58.123439  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (3.877006ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36770]
I0919 11:25:58.123872  108638 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0919 11:25:58.140999  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:token-cleaner: (1.236366ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36770]
I0919 11:25:58.142580  108638 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.132026ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36770]
I0919 11:25:58.161970  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.369812ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36770]
I0919 11:25:58.162180  108638 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0919 11:25:58.181005  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings/system:controller:bootstrap-signer: (1.385043ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36770]
I0919 11:25:58.184197  108638 httplog.go:90] GET /api/v1/namespaces/kube-public: (2.487121ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36770]
I0919 11:25:58.201887  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings: (2.342547ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36770]
I0919 11:25:58.202160  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:25:58.202192  108638 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:25:58.202226  108638 httplog.go:90] GET /healthz: (969.971µs) 0 [Go-http-client/1.1 127.0.0.1:36618]
I0919 11:25:58.202577  108638 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0919 11:25:58.211168  108638 httplog.go:90] GET /healthz: (1.08184ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36770]
I0919 11:25:58.212858  108638 httplog.go:90] GET /api/v1/namespaces/default: (1.269873ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36770]
I0919 11:25:58.215350  108638 httplog.go:90] POST /api/v1/namespaces: (1.939074ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36770]
I0919 11:25:58.217114  108638 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.153403ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36770]
I0919 11:25:58.228710  108638 httplog.go:90] POST /api/v1/namespaces/default/services: (10.963664ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36770]
I0919 11:25:58.230468  108638 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.298298ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36770]
I0919 11:25:58.232813  108638 httplog.go:90] POST /api/v1/namespaces/default/endpoints: (1.896079ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36770]
I0919 11:25:58.303029  108638 httplog.go:90] GET /healthz: (1.684823ms) 200 [Go-http-client/1.1 127.0.0.1:36770]
W0919 11:25:58.303810  108638 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 11:25:58.303857  108638 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 11:25:58.303869  108638 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 11:25:58.303897  108638 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 11:25:58.303906  108638 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 11:25:58.303914  108638 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 11:25:58.303922  108638 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 11:25:58.303933  108638 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 11:25:58.303941  108638 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 11:25:58.303950  108638 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 11:25:58.303991  108638 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0919 11:25:58.304007  108638 factory.go:294] Creating scheduler from algorithm provider 'DefaultProvider'
I0919 11:25:58.304017  108638 factory.go:382] Creating scheduler with fit predicates 'map[CheckNodeUnschedulable:{} CheckVolumeBinding:{} GeneralPredicates:{} MatchInterPodAffinity:{} MaxAzureDiskVolumeCount:{} MaxCSIVolumeCountPred:{} MaxEBSVolumeCount:{} MaxGCEPDVolumeCount:{} NoDiskConflict:{} NoVolumeZoneConflict:{} PodToleratesNodeTaints:{}]' and priority functions 'map[BalancedResourceAllocation:{} ImageLocalityPriority:{} InterPodAffinityPriority:{} LeastRequestedPriority:{} NodeAffinityPriority:{} NodePreferAvoidPodsPriority:{} SelectorSpreadPriority:{} TaintTolerationPriority:{}]'
I0919 11:25:58.304180  108638 shared_informer.go:197] Waiting for caches to sync for scheduler
I0919 11:25:58.304410  108638 reflector.go:118] Starting reflector *v1.Pod (12h0m0s) from k8s.io/kubernetes/test/integration/scheduler/util.go:231
I0919 11:25:58.304422  108638 reflector.go:153] Listing and watching *v1.Pod from k8s.io/kubernetes/test/integration/scheduler/util.go:231
I0919 11:25:58.305396  108638 httplog.go:90] GET /api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0: (658.961µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36770]
I0919 11:25:58.306380  108638 get.go:251] Starting watch for /api/v1/pods, rv=30684 labels= fields=status.phase!=Failed,status.phase!=Succeeded timeout=7m7s
I0919 11:25:58.404348  108638 shared_informer.go:227] caches populated
I0919 11:25:58.404378  108638 shared_informer.go:204] Caches are synced for scheduler 
I0919 11:25:58.404739  108638 reflector.go:118] Starting reflector *v1.ReplicaSet (1s) from k8s.io/client-go/informers/factory.go:134
I0919 11:25:58.404759  108638 reflector.go:153] Listing and watching *v1.ReplicaSet from k8s.io/client-go/informers/factory.go:134
I0919 11:25:58.405164  108638 reflector.go:118] Starting reflector *v1.StatefulSet (1s) from k8s.io/client-go/informers/factory.go:134
I0919 11:25:58.405182  108638 reflector.go:153] Listing and watching *v1.StatefulSet from k8s.io/client-go/informers/factory.go:134
I0919 11:25:58.405584  108638 reflector.go:118] Starting reflector *v1beta1.PodDisruptionBudget (1s) from k8s.io/client-go/informers/factory.go:134
I0919 11:25:58.405599  108638 reflector.go:153] Listing and watching *v1beta1.PodDisruptionBudget from k8s.io/client-go/informers/factory.go:134
I0919 11:25:58.406018  108638 reflector.go:118] Starting reflector *v1.StorageClass (1s) from k8s.io/client-go/informers/factory.go:134
I0919 11:25:58.406043  108638 reflector.go:153] Listing and watching *v1.StorageClass from k8s.io/client-go/informers/factory.go:134
I0919 11:25:58.406387  108638 reflector.go:118] Starting reflector *v1beta1.CSINode (1s) from k8s.io/client-go/informers/factory.go:134
I0919 11:25:58.406412  108638 reflector.go:153] Listing and watching *v1beta1.CSINode from k8s.io/client-go/informers/factory.go:134
I0919 11:25:58.406827  108638 reflector.go:118] Starting reflector *v1.PersistentVolumeClaim (1s) from k8s.io/client-go/informers/factory.go:134
I0919 11:25:58.406851  108638 reflector.go:153] Listing and watching *v1.PersistentVolumeClaim from k8s.io/client-go/informers/factory.go:134
I0919 11:25:58.407153  108638 reflector.go:118] Starting reflector *v1.ReplicationController (1s) from k8s.io/client-go/informers/factory.go:134
I0919 11:25:58.407174  108638 reflector.go:153] Listing and watching *v1.ReplicationController from k8s.io/client-go/informers/factory.go:134
I0919 11:25:58.407497  108638 reflector.go:118] Starting reflector *v1.PersistentVolume (1s) from k8s.io/client-go/informers/factory.go:134
I0919 11:25:58.407529  108638 reflector.go:153] Listing and watching *v1.PersistentVolume from k8s.io/client-go/informers/factory.go:134
I0919 11:25:58.407920  108638 reflector.go:118] Starting reflector *v1.Service (1s) from k8s.io/client-go/informers/factory.go:134
I0919 11:25:58.407939  108638 reflector.go:153] Listing and watching *v1.Service from k8s.io/client-go/informers/factory.go:134
I0919 11:25:58.408674  108638 httplog.go:90] GET /apis/apps/v1/statefulsets?limit=500&resourceVersion=0: (474.878µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36806]
I0919 11:25:58.408676  108638 httplog.go:90] GET /apis/apps/v1/replicasets?limit=500&resourceVersion=0: (559.052µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:58.408881  108638 reflector.go:118] Starting reflector *v1.Node (1s) from k8s.io/client-go/informers/factory.go:134
I0919 11:25:58.408899  108638 reflector.go:153] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:134
I0919 11:25:58.408987  108638 httplog.go:90] GET /api/v1/persistentvolumes?limit=500&resourceVersion=0: (725.405µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36818]
I0919 11:25:58.409160  108638 httplog.go:90] GET /apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: (331.806µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36808]
I0919 11:25:58.409609  108638 httplog.go:90] GET /apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: (355.198µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36810]
I0919 11:25:58.410057  108638 httplog.go:90] GET /api/v1/nodes?limit=500&resourceVersion=0: (441.513µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:25:58.410079  108638 httplog.go:90] GET /apis/storage.k8s.io/v1beta1/csinodes?limit=500&resourceVersion=0: (346.584µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36812]
I0919 11:25:58.410379  108638 httplog.go:90] GET /api/v1/services?limit=500&resourceVersion=0: (445.767µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36818]
I0919 11:25:58.410666  108638 httplog.go:90] GET /api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: (440.271µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36814]
I0919 11:25:58.410914  108638 get.go:251] Starting watch for /apis/apps/v1/statefulsets, rv=30684 labels= fields= timeout=9m39s
I0919 11:25:58.411194  108638 get.go:251] Starting watch for /api/v1/persistentvolumes, rv=30684 labels= fields= timeout=5m33s
I0919 11:25:58.411289  108638 get.go:251] Starting watch for /apis/apps/v1/replicasets, rv=30684 labels= fields= timeout=9m18s
I0919 11:25:58.411518  108638 get.go:251] Starting watch for /apis/policy/v1beta1/poddisruptionbudgets, rv=30684 labels= fields= timeout=8m48s
I0919 11:25:58.411826  108638 get.go:251] Starting watch for /api/v1/nodes, rv=30684 labels= fields= timeout=9m29s
I0919 11:25:58.411665  108638 httplog.go:90] GET /api/v1/replicationcontrollers?limit=500&resourceVersion=0: (854.017µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36816]
I0919 11:25:58.412056  108638 get.go:251] Starting watch for /apis/storage.k8s.io/v1/storageclasses, rv=30684 labels= fields= timeout=7m23s
I0919 11:25:58.412139  108638 get.go:251] Starting watch for /api/v1/services, rv=30908 labels= fields= timeout=8m4s
I0919 11:25:58.412585  108638 get.go:251] Starting watch for /apis/storage.k8s.io/v1beta1/csinodes, rv=30684 labels= fields= timeout=8m49s
I0919 11:25:58.412614  108638 get.go:251] Starting watch for /api/v1/persistentvolumeclaims, rv=30684 labels= fields= timeout=7m41s
I0919 11:25:58.412680  108638 get.go:251] Starting watch for /api/v1/replicationcontrollers, rv=30684 labels= fields= timeout=5m1s
I0919 11:25:58.504653  108638 shared_informer.go:227] caches populated
I0919 11:25:58.504680  108638 shared_informer.go:227] caches populated
I0919 11:25:58.504686  108638 shared_informer.go:227] caches populated
I0919 11:25:58.504692  108638 shared_informer.go:227] caches populated
I0919 11:25:58.504698  108638 shared_informer.go:227] caches populated
I0919 11:25:58.504703  108638 shared_informer.go:227] caches populated
I0919 11:25:58.504709  108638 shared_informer.go:227] caches populated
I0919 11:25:58.504714  108638 shared_informer.go:227] caches populated
I0919 11:25:58.504720  108638 shared_informer.go:227] caches populated
I0919 11:25:58.504730  108638 shared_informer.go:227] caches populated
I0919 11:25:58.504739  108638 shared_informer.go:227] caches populated
I0919 11:25:58.507559  108638 httplog.go:90] POST /api/v1/nodes: (2.419322ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36828]
I0919 11:25:58.508161  108638 node_tree.go:93] Added node "testnode" in group "" to NodeTree
I0919 11:25:58.512700  108638 httplog.go:90] PUT /api/v1/nodes/testnode/status: (3.262109ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36828]
I0919 11:25:58.515534  108638 httplog.go:90] POST /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods: (2.271509ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36828]
I0919 11:25:58.515989  108638 scheduling_queue.go:830] About to try and schedule pod node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pidpressure-fake-name
I0919 11:25:58.516004  108638 scheduler.go:530] Attempting to schedule pod: node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pidpressure-fake-name
I0919 11:25:58.516119  108638 scheduler_binder.go:257] AssumePodVolumes for pod "node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pidpressure-fake-name", node "testnode"
I0919 11:25:58.516134  108638 scheduler_binder.go:267] AssumePodVolumes for pod "node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pidpressure-fake-name", node "testnode": all PVCs bound and nothing to do
I0919 11:25:58.516181  108638 factory.go:606] Attempting to bind pidpressure-fake-name to testnode
I0919 11:25:58.518534  108638 httplog.go:90] POST /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name/binding: (2.125927ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36828]
I0919 11:25:58.518788  108638 scheduler.go:662] pod node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pidpressure-fake-name is bound successfully on node "testnode", 1 nodes evaluated, 1 nodes were found feasible. Bound node resource: "Capacity: CPU<0>|Memory<0>|Pods<32>|StorageEphemeral<0>; Allocatable: CPU<0>|Memory<0>|Pods<32>|StorageEphemeral<0>.".
I0919 11:25:58.520732  108638 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/events: (1.644803ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36828]
I0919 11:25:58.621267  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (4.42323ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36828]
I0919 11:25:58.717682  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.514835ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36828]
I0919 11:25:58.817846  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.684276ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36828]
I0919 11:25:58.917587  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.413348ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36828]
I0919 11:25:59.017877  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.645742ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36828]
I0919 11:25:59.119052  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.609023ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36828]
I0919 11:25:59.217614  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.462477ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36828]
I0919 11:25:59.320205  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (2.079948ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36828]
I0919 11:25:59.409573  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:25:59.410907  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:25:59.411099  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:25:59.411291  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:25:59.411637  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:25:59.412203  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:25:59.423797  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (7.626009ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36828]
I0919 11:25:59.517732  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.505821ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36828]
I0919 11:25:59.618193  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.976586ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36828]
I0919 11:25:59.718151  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.90227ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36828]
I0919 11:25:59.818711  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (2.407699ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36828]
I0919 11:25:59.918119  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.818396ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36828]
I0919 11:26:00.018459  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (2.245351ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36828]
I0919 11:26:00.118188  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.954974ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36828]
I0919 11:26:00.218056  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.8287ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36828]
I0919 11:26:00.317943  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.690955ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36828]
I0919 11:26:00.409702  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:00.411062  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:00.411260  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:00.411406  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:00.411730  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:00.412358  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:00.417955  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.676646ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36828]
I0919 11:26:00.517871  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.659437ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36828]
I0919 11:26:00.617674  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.430865ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36828]
I0919 11:26:00.718119  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.821881ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36828]
I0919 11:26:00.817952  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.740564ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36828]
I0919 11:26:00.917994  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.733804ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36828]
I0919 11:26:01.018072  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.870465ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36828]
I0919 11:26:01.118163  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.673404ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36828]
I0919 11:26:01.217943  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.770309ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36828]
I0919 11:26:01.317585  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.39423ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36828]
I0919 11:26:01.409814  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:01.411241  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:01.411391  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:01.411533  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:01.411870  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:01.412492  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:01.418051  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.797041ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36828]
I0919 11:26:01.518038  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.79577ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36828]
I0919 11:26:01.617944  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.647436ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36828]
I0919 11:26:01.717913  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.673937ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36828]
I0919 11:26:01.817994  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.63383ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36828]
I0919 11:26:01.918135  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.780763ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36828]
I0919 11:26:02.018235  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.949161ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36828]
I0919 11:26:02.119880  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.76363ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36828]
I0919 11:26:02.217818  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.543799ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36828]
I0919 11:26:02.317829  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.552537ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36828]
I0919 11:26:02.409985  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:02.411692  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:02.412000  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:02.412298  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:02.412319  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:02.412619  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:02.418032  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.799173ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36828]
I0919 11:26:02.518173  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.901516ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36828]
I0919 11:26:02.618358  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.896646ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36828]
I0919 11:26:02.718060  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.813368ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36828]
I0919 11:26:02.817874  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.60802ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36828]
I0919 11:26:02.918038  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.799569ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36828]
I0919 11:26:03.017720  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.482906ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36828]
I0919 11:26:03.117499  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.314155ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36828]
I0919 11:26:03.219813  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (3.636991ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36828]
I0919 11:26:03.318048  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.798093ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36828]
I0919 11:26:03.410203  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:03.411893  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:03.412445  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:03.412534  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:03.412780  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:03.413158  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:03.417841  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.459657ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36828]
I0919 11:26:03.519748  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.516786ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36828]
I0919 11:26:03.617746  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.369793ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36828]
I0919 11:26:03.717929  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.728512ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36828]
I0919 11:26:03.818525  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (2.29604ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36828]
I0919 11:26:03.917875  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.641683ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36828]
I0919 11:26:04.018083  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.857853ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36828]
I0919 11:26:04.118988  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (2.771177ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36828]
I0919 11:26:04.218559  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.749638ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36828]
I0919 11:26:04.317698  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.422308ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36828]
I0919 11:26:04.410389  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:04.412049  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:04.412601  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:04.412706  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:04.412953  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:04.413270  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:04.417821  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.623435ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36828]
I0919 11:26:04.517873  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.700635ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36828]
I0919 11:26:04.618550  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.734835ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36828]
I0919 11:26:04.718520  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.484855ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36828]
I0919 11:26:04.820099  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (3.766414ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36828]
I0919 11:26:04.918040  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.896867ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36828]
I0919 11:26:05.018185  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.984685ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36828]
I0919 11:26:05.117820  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.646421ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36828]
I0919 11:26:05.217570  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.357259ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36828]
I0919 11:26:05.319548  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (2.959377ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36828]
I0919 11:26:05.412726  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:05.413124  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:05.413396  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:05.414010  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:05.414372  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:05.414465  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:05.417814  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.64352ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36828]
I0919 11:26:05.517678  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.43676ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36828]
I0919 11:26:05.617878  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.650433ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36828]
I0919 11:26:05.717721  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.50067ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36828]
I0919 11:26:05.817803  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.580485ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36828]
I0919 11:26:05.917959  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.74236ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36828]
I0919 11:26:06.017938  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.739992ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36828]
I0919 11:26:06.117915  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.673368ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36828]
I0919 11:26:06.218061  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.822399ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36828]
I0919 11:26:06.321145  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.706854ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36828]
I0919 11:26:06.412932  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:06.413256  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:06.413517  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:06.414107  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:06.414466  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:06.414579  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:06.419125  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (2.9956ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36828]
I0919 11:26:06.517914  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.736543ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36828]
I0919 11:26:06.618012  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.774692ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36828]
I0919 11:26:06.717903  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.698349ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36828]
I0919 11:26:06.820710  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (4.430582ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36828]
I0919 11:26:06.917676  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.34052ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36828]
I0919 11:26:07.023105  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (6.891699ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36828]
I0919 11:26:07.117829  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.639898ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36828]
I0919 11:26:07.218933  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (2.460212ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36828]
I0919 11:26:07.317909  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.666033ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36828]
I0919 11:26:07.413172  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:07.413389  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:07.413698  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:07.414294  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:07.414617  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:07.414752  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:07.417977  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.790654ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36828]
I0919 11:26:07.517816  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.622941ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36828]
I0919 11:26:07.617771  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.54939ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36828]
I0919 11:26:07.717999  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.76677ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36828]
I0919 11:26:07.817807  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.531646ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36828]
I0919 11:26:07.917763  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.588916ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36828]
I0919 11:26:08.017538  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.361434ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36828]
I0919 11:26:08.118614  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.54369ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36828]
I0919 11:26:08.213575  108638 httplog.go:90] GET /api/v1/namespaces/default: (1.684369ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36828]
I0919 11:26:08.215583  108638 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.488265ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36828]
I0919 11:26:08.217487  108638 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.379552ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36828]
I0919 11:26:08.218297  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.465157ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:08.317834  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.599784ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:08.413275  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:08.413555  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:08.413842  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:08.414444  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:08.414756  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:08.414881  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:08.417972  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.437475ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:08.517834  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.604584ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:08.617564  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.411395ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:08.717900  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.760371ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:08.818025  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.75546ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:08.917872  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.69863ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:09.019675  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (2.998318ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:09.117790  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.564515ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:09.217895  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.661112ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:09.318453  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.495132ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:09.413467  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:09.413739  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:09.413967  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:09.414558  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:09.414913  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:09.415476  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:09.417615  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.435897ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:09.518317  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.722163ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:09.617596  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.353489ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:09.718680  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.905697ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:09.817783  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.539675ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:09.917788  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.473124ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:10.017863  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.577962ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:10.117739  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.499541ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:10.217585  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.35519ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:10.317810  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.540914ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:10.413637  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:10.413903  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:10.414095  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:10.414702  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:10.415078  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:10.415666  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:10.417609  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.415736ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:10.517724  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.472216ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:10.617752  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.592472ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:10.717821  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.456288ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:10.817553  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.365553ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:10.917665  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.397013ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:11.018269  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.711235ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:11.118401  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.456145ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:11.217801  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.602358ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:11.332898  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (5.443231ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:11.413804  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:11.414027  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:11.414222  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:11.414851  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:11.415345  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:11.415843  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:11.417776  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.358807ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:11.517603  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.360177ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:11.617607  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.434249ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:11.717618  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.459092ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:11.817814  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.580365ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:11.917923  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.552655ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:12.017934  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.576794ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:12.119738  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (3.527059ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:12.219016  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (2.217199ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:12.317772  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.485872ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:12.414107  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:12.414146  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:12.414376  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:12.414954  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:12.415607  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:12.416093  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:12.417994  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.682757ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:12.520301  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (4.119415ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:12.617581  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.426496ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:12.718609  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.494832ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:12.818571  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.901102ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:12.917977  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.662253ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:13.017802  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.593434ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:13.118235  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (2.026399ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:13.217423  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.302171ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:13.318202  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.922412ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:13.414268  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:13.414291  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:13.415104  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:13.415128  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:13.415781  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:13.416254  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:13.418008  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.533195ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:13.518385  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.463096ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:13.618354  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.368547ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:13.718106  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.58171ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:13.818132  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.556426ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:13.917940  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.562674ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:14.018415  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.991037ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:14.117705  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.417174ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:14.217883  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.635853ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:14.317743  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.520798ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:14.414446  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:14.414467  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:14.415208  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:14.415259  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:14.416104  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:14.416393  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:14.417936  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.72194ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:14.518048  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.688025ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:14.617698  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.401808ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:14.717968  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.80222ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:14.818542  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.362282ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:14.918199  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.985251ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:15.018119  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.519945ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:15.121903  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (5.349865ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:15.218610  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.852874ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:15.317698  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.384031ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:15.414609  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:15.414680  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:15.415373  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:15.415467  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:15.416724  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:15.416809  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:15.417930  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.527568ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:15.517878  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.66547ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:15.617926  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.694738ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:15.717830  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.652183ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:15.818139  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.866663ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:15.925849  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (2.40648ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:16.017818  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.598827ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:16.118023  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.809553ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:16.220411  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (4.081667ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:16.318251  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.61274ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:16.414811  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:16.414850  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:16.415699  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:16.415713  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:16.417441  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:16.417546  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:16.417908  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.375128ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:16.521846  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (4.886356ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:16.619812  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (2.933609ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:16.717853  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.636721ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:16.817936  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.747329ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:16.919659  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.510099ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:17.017861  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.600465ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:17.118244  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.692036ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:17.217603  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.345628ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:17.317807  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.587273ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:17.414930  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:17.414963  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:17.415853  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:17.416045  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:17.417474  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.297748ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:17.417608  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:17.417679  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:17.517462  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.320684ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:17.617459  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.320157ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:17.717775  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.536894ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:17.818437  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (2.17939ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:17.917566  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.366565ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:18.019009  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (2.869654ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:18.117706  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.493877ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:18.213682  108638 httplog.go:90] GET /api/v1/namespaces/default: (1.683067ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:18.217069  108638 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (2.82118ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:18.217293  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.177771ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36828]
I0919 11:26:18.218719  108638 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.138858ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:18.319279  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (3.111409ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:18.415112  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:18.415161  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:18.416106  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:18.416141  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:18.417739  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:18.417811  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:18.420068  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (3.280996ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:18.518695  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.586562ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:18.620303  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.691923ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:18.718849  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (2.715289ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:18.818140  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.935818ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:18.917837  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.380863ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:19.019713  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (3.474906ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:19.117567  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.326025ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:19.217778  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.565309ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:19.317737  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.497656ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:19.415350  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:19.415383  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:19.416341  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:19.416342  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:19.417780  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.511376ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:19.418110  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:19.418187  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:19.518156  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.972931ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:19.617811  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.646301ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:19.719393  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (2.411788ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:19.817950  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.720801ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:19.917480  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.285934ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:20.017512  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.266001ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:20.118047  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.467626ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:20.218292  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (2.104394ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:20.317849  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.64469ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:20.415514  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:20.415547  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:20.416460  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:20.416510  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:20.417599  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.442409ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:20.418244  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:20.418360  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:20.517950  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.711438ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:20.617771  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.556732ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:20.717749  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.562708ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:20.817876  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.648784ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:20.917693  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.496861ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:21.017728  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.488693ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:21.117806  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.556299ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:21.217657  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.459904ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:21.317742  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.556499ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:21.415696  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:21.415720  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:21.416634  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:21.416721  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:21.417626  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.37967ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:21.418397  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:21.418504  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:21.518147  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.608906ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:21.618619  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.723717ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:21.717813  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.422491ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:21.818203  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.37551ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:21.917854  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.585713ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:22.017879  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.624569ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:22.119605  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.400781ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:22.217797  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.464635ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:22.317921  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.595429ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:22.415863  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:22.415869  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:22.416842  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:22.416901  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:22.417834  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.605768ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:22.418553  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:22.418760  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:22.517687  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.291825ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:22.618065  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.801539ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:22.719959  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (2.722721ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:22.818112  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.838988ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:22.917828  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.581001ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:23.017831  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.54046ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:23.117602  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.284702ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:23.217760  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.385711ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:23.317889  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.629506ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:23.416510  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:23.416546  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:23.416971  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:23.417064  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:23.418232  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.997361ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:23.418733  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:23.418916  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:23.517486  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.305392ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:23.618132  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.570097ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:23.717934  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.647422ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:23.818726  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.722581ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:23.917909  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.665303ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:24.017813  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.568272ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:24.117532  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.334162ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:24.217608  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.370592ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:24.317598  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.386567ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:24.416702  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:24.416728  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:24.417373  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:24.417427  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:24.417608  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.38395ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:24.419028  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:24.419052  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:24.517999  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.794002ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:24.618098  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.477481ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:24.719725  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (3.540063ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:24.818218  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.739033ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:24.917777  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.563713ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:25.017758  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.51669ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:25.117591  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.420578ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:25.217793  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.630841ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:25.317899  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.611171ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:25.416864  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:25.416876  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:25.417526  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:25.417547  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:25.417964  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.703593ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:25.419197  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:25.419233  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:25.517879  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.59269ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:25.619254  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (3.014165ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:25.717768  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.478951ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:25.817592  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.390416ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:25.917898  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.650468ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:26.017680  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.352203ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:26.118505  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (2.303427ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:26.218384  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.526711ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:26.318609  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.925492ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:26.417033  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:26.417112  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:26.417449  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.221169ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:26.417764  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:26.417788  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:26.419324  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:26.419387  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:26.518586  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.456134ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:26.617605  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.423059ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:26.718122  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.75482ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:26.817785  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.591677ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:26.917428  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.270261ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:27.017572  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.366174ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:27.117623  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.391447ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:27.218047  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.401107ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:27.318377  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (2.041194ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:27.417199  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:27.417214  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:27.417638  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.420212ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:27.417910  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:27.417927  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:27.419458  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:27.419531  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:27.517830  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.563727ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:27.617603  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.40432ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:27.717752  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.484904ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:27.817590  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.327028ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:27.917503  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.26625ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:28.018036  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.876096ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:28.117699  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.520021ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:28.213806  108638 httplog.go:90] GET /api/v1/namespaces/default: (1.807028ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:28.216415  108638 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (2.124856ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:28.217870  108638 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (999.009µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0919 11:26:28.218078  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.964188ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36828]
I0919 11:26:28.317893  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.588115ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36828]
I0919 11:26:28.417297  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:28.417375  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:28.418058  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:28.418110  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:28.418656  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.643779ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36828]
I0919 11:26:28.419629  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:28.419734  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:26:28.517814  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.491123ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36828]
I0919 11:26:28.519467  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (1.260405ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36828]
I0919 11:26:28.524562  108638 httplog.go:90] DELETE /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (4.436824ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36828]
I0919 11:26:28.526986  108638 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure73889337-4204-42c1-81da-2ea9d5b294d8/pods/pidpressure-fake-name: (913.929µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36828]
E0919 11:26:28.527686  108638 scheduling_queue.go:833] Error while retrieving next pod from scheduling queue: scheduling queue is closed
I0919 11:26:28.527917  108638 httplog.go:90] GET /apis/apps/v1/statefulsets?allowWatchBookmarks=true&resourceVersion=30684&timeout=9m39s&timeoutSeconds=579&watch=true: (30.117286813s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36822]
I0919 11:26:28.527931  108638 httplog.go:90] GET /api/v1/services?allowWatchBookmarks=true&resourceVersion=30908&timeout=8m4s&timeoutSeconds=484&watch=true: (30.116140981s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36818]
I0919 11:26:28.527944  108638 httplog.go:90] GET /apis/apps/v1/replicasets?allowWatchBookmarks=true&resourceVersion=30684&timeout=9m18s&timeoutSeconds=558&watch=true: (30.116868848s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36824]
I0919 11:26:28.528011  108638 httplog.go:90] GET /apis/policy/v1beta1/poddisruptionbudgets?allowWatchBookmarks=true&resourceVersion=30684&timeout=8m48s&timeoutSeconds=528&watch=true: (30.116602809s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36808]
I0919 11:26:28.528082  108638 httplog.go:90] GET /apis/storage.k8s.io/v1beta1/csinodes?allowWatchBookmarks=true&resourceVersion=30684&timeout=8m49s&timeoutSeconds=529&watch=true: (30.115717488s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36812]
I0919 11:26:28.528097  108638 httplog.go:90] GET /api/v1/replicationcontrollers?allowWatchBookmarks=true&resourceVersion=30684&timeout=5m1s&timeoutSeconds=301&watch=true: (30.115722467s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36816]
I0919 11:26:28.528098  108638 httplog.go:90] GET /api/v1/pods?allowWatchBookmarks=true&fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&resourceVersion=30684&timeoutSeconds=427&watch=true: (30.222056863s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36770]
I0919 11:26:28.527944  108638 httplog.go:90] GET /apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=30684&timeout=7m23s&timeoutSeconds=443&watch=true: (30.116046113s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36826]
I0919 11:26:28.528181  108638 httplog.go:90] GET /api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=30684&timeout=7m41s&timeoutSeconds=461&watch=true: (30.115828869s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36814]
I0919 11:26:28.528214  108638 httplog.go:90] GET /api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=30684&timeout=5m33s&timeoutSeconds=333&watch=true: (30.117248071s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36806]
I0919 11:26:28.528039  108638 httplog.go:90] GET /api/v1/nodes?allowWatchBookmarks=true&resourceVersion=30684&timeout=9m29s&timeoutSeconds=569&watch=true: (30.116450242s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0919 11:26:28.532411  108638 httplog.go:90] DELETE /api/v1/nodes: (4.368367ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36828]
I0919 11:26:28.532577  108638 controller.go:182] Shutting down kubernetes service endpoint reconciler
I0919 11:26:28.533704  108638 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (930.481µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36828]
I0919 11:26:28.535419  108638 httplog.go:90] PUT /api/v1/namespaces/default/endpoints/kubernetes: (1.359961ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36828]
--- FAIL: TestNodePIDPressure (33.87s)
    predicates_test.go:924: Test Failed: error, timed out waiting for the condition, while waiting for scheduled

				from junit_d965d8661547eb73cabe6d94d5550ec333e4c0fa_20190919-111826.xml

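The failure above is a timeout while polling for the pod to be scheduled; the surrounding GET/resync lines are polling noise. A minimal sketch for filtering a saved copy of a log like this down to scheduler errors (the log file path and sample lines here are hypothetical stand-ins, not part of the job output):

```shell
# Create a tiny stand-in log so the filter commands are self-contained.
cat > /tmp/build-log.txt <<'EOF'
I0919 11:26:20.017512 httplog.go:90] GET /api/v1/namespaces/ns/pods/pidpressure-fake-name: 200
I0919 11:26:20.415514 reflector.go:236] forcing resync
E0919 11:26:28.527686 scheduling_queue.go:833] Error while retrieving next pod from scheduling queue
EOF

# Drop the GET polling and informer-resync noise; keep scheduler errors.
grep -Ev 'httplog|forcing resync' /tmp/build-log.txt
```

Against the real build log, the same `grep -Ev` filter leaves the scheduling-queue and predicate messages that explain why `pidpressure-fake-name` never scheduled.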


k8s.io/kubernetes/test/integration/scheduler TestSchedulerCreationFromConfigMap 4.19s

go test -v k8s.io/kubernetes/test/integration/scheduler -run TestSchedulerCreationFromConfigMap$
=== RUN   TestSchedulerCreationFromConfigMap
W0919 11:28:03.619034  108638 services.go:35] No CIDR for service cluster IPs specified. Default value which was 10.0.0.0/24 is deprecated and will be removed in future releases. Please specify it using --service-cluster-ip-range on kube-apiserver.
I0919 11:28:03.619056  108638 services.go:47] Setting service IP to "10.0.0.1" (read-write).
I0919 11:28:03.619077  108638 master.go:303] Node port range unspecified. Defaulting to 30000-32767.
I0919 11:28:03.619085  108638 master.go:259] Using reconciler: 
I0919 11:28:03.621138  108638 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.621425  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:28:03.621457  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:28:03.622211  108638 store.go:1342] Monitoring podtemplates count at <storage-prefix>//podtemplates
I0919 11:28:03.622319  108638 reflector.go:153] Listing and watching *core.PodTemplate from storage/cacher.go:/podtemplates
I0919 11:28:03.622369  108638 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.623178  108638 watch_cache.go:405] Replace watchCache (rev: 46484) 
I0919 11:28:03.625298  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:28:03.625367  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:28:03.626325  108638 store.go:1342] Monitoring events count at <storage-prefix>//events
I0919 11:28:03.626378  108638 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.626572  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:28:03.626597  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:28:03.626712  108638 reflector.go:153] Listing and watching *core.Event from storage/cacher.go:/events
I0919 11:28:03.627912  108638 watch_cache.go:405] Replace watchCache (rev: 46484) 
I0919 11:28:03.628392  108638 store.go:1342] Monitoring limitranges count at <storage-prefix>//limitranges
I0919 11:28:03.628477  108638 reflector.go:153] Listing and watching *core.LimitRange from storage/cacher.go:/limitranges
I0919 11:28:03.628426  108638 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.629114  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:28:03.629153  108638 watch_cache.go:405] Replace watchCache (rev: 46484) 
I0919 11:28:03.629257  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:28:03.629973  108638 store.go:1342] Monitoring resourcequotas count at <storage-prefix>//resourcequotas
I0919 11:28:03.630030  108638 reflector.go:153] Listing and watching *core.ResourceQuota from storage/cacher.go:/resourcequotas
I0919 11:28:03.630147  108638 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.630959  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:28:03.631765  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:28:03.631851  108638 watch_cache.go:405] Replace watchCache (rev: 46484) 
I0919 11:28:03.638129  108638 store.go:1342] Monitoring secrets count at <storage-prefix>//secrets
I0919 11:28:03.638220  108638 reflector.go:153] Listing and watching *core.Secret from storage/cacher.go:/secrets
I0919 11:28:03.638270  108638 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.638467  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:28:03.638485  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:28:03.639978  108638 watch_cache.go:405] Replace watchCache (rev: 46484) 
I0919 11:28:03.641357  108638 store.go:1342] Monitoring persistentvolumes count at <storage-prefix>//persistentvolumes
I0919 11:28:03.641426  108638 reflector.go:153] Listing and watching *core.PersistentVolume from storage/cacher.go:/persistentvolumes
I0919 11:28:03.641519  108638 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.641739  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:28:03.641757  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:28:03.642292  108638 watch_cache.go:405] Replace watchCache (rev: 46484) 
I0919 11:28:03.642558  108638 store.go:1342] Monitoring persistentvolumeclaims count at <storage-prefix>//persistentvolumeclaims
I0919 11:28:03.642622  108638 reflector.go:153] Listing and watching *core.PersistentVolumeClaim from storage/cacher.go:/persistentvolumeclaims
I0919 11:28:03.642713  108638 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.642898  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:28:03.642914  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:28:03.643658  108638 watch_cache.go:405] Replace watchCache (rev: 46484) 
I0919 11:28:03.643789  108638 store.go:1342] Monitoring configmaps count at <storage-prefix>//configmaps
I0919 11:28:03.643836  108638 reflector.go:153] Listing and watching *core.ConfigMap from storage/cacher.go:/configmaps
I0919 11:28:03.643982  108638 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.644226  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:28:03.644317  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:28:03.645287  108638 watch_cache.go:405] Replace watchCache (rev: 46484) 
I0919 11:28:03.646472  108638 store.go:1342] Monitoring namespaces count at <storage-prefix>//namespaces
I0919 11:28:03.646618  108638 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.646825  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:28:03.646850  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:28:03.646860  108638 reflector.go:153] Listing and watching *core.Namespace from storage/cacher.go:/namespaces
I0919 11:28:03.647840  108638 store.go:1342] Monitoring endpoints count at <storage-prefix>//services/endpoints
I0919 11:28:03.647989  108638 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.648168  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:28:03.648189  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:28:03.648267  108638 reflector.go:153] Listing and watching *core.Endpoints from storage/cacher.go:/services/endpoints
I0919 11:28:03.651322  108638 store.go:1342] Monitoring nodes count at <storage-prefix>//minions
I0919 11:28:03.651356  108638 watch_cache.go:405] Replace watchCache (rev: 46484) 
I0919 11:28:03.651503  108638 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.651566  108638 reflector.go:153] Listing and watching *core.Node from storage/cacher.go:/minions
I0919 11:28:03.651743  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:28:03.651765  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:28:03.653474  108638 watch_cache.go:405] Replace watchCache (rev: 46484) 
I0919 11:28:03.654068  108638 watch_cache.go:405] Replace watchCache (rev: 46484) 
I0919 11:28:03.654464  108638 store.go:1342] Monitoring pods count at <storage-prefix>//pods
I0919 11:28:03.654498  108638 reflector.go:153] Listing and watching *core.Pod from storage/cacher.go:/pods
I0919 11:28:03.654699  108638 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.654927  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:28:03.654968  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:28:03.655578  108638 store.go:1342] Monitoring serviceaccounts count at <storage-prefix>//serviceaccounts
I0919 11:28:03.655674  108638 reflector.go:153] Listing and watching *core.ServiceAccount from storage/cacher.go:/serviceaccounts
I0919 11:28:03.655825  108638 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.655962  108638 watch_cache.go:405] Replace watchCache (rev: 46484) 
I0919 11:28:03.656054  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:28:03.656083  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:28:03.656754  108638 watch_cache.go:405] Replace watchCache (rev: 46484) 
I0919 11:28:03.657224  108638 store.go:1342] Monitoring services count at <storage-prefix>//services/specs
I0919 11:28:03.657269  108638 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.657289  108638 reflector.go:153] Listing and watching *core.Service from storage/cacher.go:/services/specs
I0919 11:28:03.657542  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:28:03.657570  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:28:03.659484  108638 watch_cache.go:405] Replace watchCache (rev: 46484) 
I0919 11:28:03.661309  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:28:03.661341  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:28:03.662244  108638 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.662459  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:28:03.662486  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:28:03.663070  108638 store.go:1342] Monitoring replicationcontrollers count at <storage-prefix>//controllers
I0919 11:28:03.663103  108638 rest.go:115] the default service ipfamily for this cluster is: IPv4
I0919 11:28:03.663117  108638 reflector.go:153] Listing and watching *core.ReplicationController from storage/cacher.go:/controllers
I0919 11:28:03.663521  108638 storage_factory.go:285] storing bindings in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.663759  108638 storage_factory.go:285] storing componentstatuses in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.664464  108638 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.665166  108638 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.666032  108638 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.666724  108638 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.667165  108638 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.667308  108638 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.667503  108638 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.668064  108638 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.668931  108638 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.669341  108638 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.670596  108638 watch_cache.go:405] Replace watchCache (rev: 46484) 
I0919 11:28:03.670687  108638 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.671225  108638 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.672294  108638 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.672669  108638 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.674283  108638 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.674940  108638 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.675615  108638 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.675923  108638 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.676225  108638 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.676365  108638 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.676531  108638 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.677359  108638 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.677879  108638 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.678835  108638 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.679834  108638 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.680303  108638 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.680899  108638 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.682287  108638 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.682873  108638 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.683781  108638 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.684760  108638 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.685390  108638 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.686250  108638 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.686532  108638 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.686813  108638 master.go:450] Skipping disabled API group "auditregistration.k8s.io".
I0919 11:28:03.686971  108638 master.go:461] Enabling API group "authentication.k8s.io".
I0919 11:28:03.687059  108638 master.go:461] Enabling API group "authorization.k8s.io".
I0919 11:28:03.687352  108638 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.687841  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:28:03.687899  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:28:03.688841  108638 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0919 11:28:03.688882  108638 reflector.go:153] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0919 11:28:03.689023  108638 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.689704  108638 watch_cache.go:405] Replace watchCache (rev: 46484) 
I0919 11:28:03.689733  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:28:03.689762  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:28:03.691311  108638 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0919 11:28:03.691418  108638 reflector.go:153] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0919 11:28:03.691689  108638 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.692015  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:28:03.692133  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:28:03.697173  108638 watch_cache.go:405] Replace watchCache (rev: 46484) 
I0919 11:28:03.697188  108638 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0919 11:28:03.697211  108638 master.go:461] Enabling API group "autoscaling".
I0919 11:28:03.697242  108638 reflector.go:153] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0919 11:28:03.697382  108638 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.697600  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:28:03.697631  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:28:03.698526  108638 store.go:1342] Monitoring jobs.batch count at <storage-prefix>//jobs
I0919 11:28:03.698586  108638 reflector.go:153] Listing and watching *batch.Job from storage/cacher.go:/jobs
I0919 11:28:03.698775  108638 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.699003  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:28:03.699031  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:28:03.699062  108638 watch_cache.go:405] Replace watchCache (rev: 46484) 
I0919 11:28:03.699832  108638 store.go:1342] Monitoring cronjobs.batch count at <storage-prefix>//cronjobs
I0919 11:28:03.699857  108638 reflector.go:153] Listing and watching *batch.CronJob from storage/cacher.go:/cronjobs
I0919 11:28:03.699859  108638 master.go:461] Enabling API group "batch".
I0919 11:28:03.700123  108638 watch_cache.go:405] Replace watchCache (rev: 46484) 
I0919 11:28:03.700123  108638 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.700320  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:28:03.700334  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:28:03.700770  108638 watch_cache.go:405] Replace watchCache (rev: 46484) 
I0919 11:28:03.701260  108638 store.go:1342] Monitoring certificatesigningrequests.certificates.k8s.io count at <storage-prefix>//certificatesigningrequests
I0919 11:28:03.701286  108638 master.go:461] Enabling API group "certificates.k8s.io".
I0919 11:28:03.701348  108638 reflector.go:153] Listing and watching *certificates.CertificateSigningRequest from storage/cacher.go:/certificatesigningrequests
I0919 11:28:03.701438  108638 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.701678  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:28:03.701709  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:28:03.703145  108638 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0919 11:28:03.703196  108638 reflector.go:153] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0919 11:28:03.703690  108638 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.703870  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:28:03.703891  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:28:03.704612  108638 watch_cache.go:405] Replace watchCache (rev: 46484) 
I0919 11:28:03.705355  108638 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0919 11:28:03.705377  108638 master.go:461] Enabling API group "coordination.k8s.io".
I0919 11:28:03.705392  108638 master.go:450] Skipping disabled API group "discovery.k8s.io".
I0919 11:28:03.705415  108638 reflector.go:153] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0919 11:28:03.705510  108638 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.705704  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:28:03.705729  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:28:03.706150  108638 watch_cache.go:405] Replace watchCache (rev: 46484) 
I0919 11:28:03.706483  108638 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0919 11:28:03.706510  108638 master.go:461] Enabling API group "extensions".
I0919 11:28:03.706531  108638 reflector.go:153] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0919 11:28:03.706664  108638 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.706708  108638 watch_cache.go:405] Replace watchCache (rev: 46484) 
I0919 11:28:03.706832  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:28:03.706845  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:28:03.710266  108638 store.go:1342] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I0919 11:28:03.710339  108638 watch_cache.go:405] Replace watchCache (rev: 46484) 
I0919 11:28:03.710417  108638 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.710507  108638 reflector.go:153] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I0919 11:28:03.710592  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:28:03.710612  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:28:03.711272  108638 watch_cache.go:405] Replace watchCache (rev: 46484) 
I0919 11:28:03.715976  108638 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0919 11:28:03.716000  108638 master.go:461] Enabling API group "networking.k8s.io".
I0919 11:28:03.716029  108638 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.716165  108638 reflector.go:153] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0919 11:28:03.716209  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:28:03.716230  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:28:03.717144  108638 store.go:1342] Monitoring runtimeclasses.node.k8s.io count at <storage-prefix>//runtimeclasses
I0919 11:28:03.717165  108638 master.go:461] Enabling API group "node.k8s.io".
I0919 11:28:03.717224  108638 watch_cache.go:405] Replace watchCache (rev: 46484) 
I0919 11:28:03.717264  108638 reflector.go:153] Listing and watching *node.RuntimeClass from storage/cacher.go:/runtimeclasses
I0919 11:28:03.717323  108638 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.717514  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:28:03.717546  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:28:03.718801  108638 watch_cache.go:405] Replace watchCache (rev: 46484) 
I0919 11:28:03.718884  108638 store.go:1342] Monitoring poddisruptionbudgets.policy count at <storage-prefix>//poddisruptionbudgets
I0919 11:28:03.718935  108638 reflector.go:153] Listing and watching *policy.PodDisruptionBudget from storage/cacher.go:/poddisruptionbudgets
I0919 11:28:03.719052  108638 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.719191  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:28:03.719212  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:28:03.729265  108638 store.go:1342] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicy
I0919 11:28:03.729296  108638 master.go:461] Enabling API group "policy".
I0919 11:28:03.729348  108638 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.729373  108638 reflector.go:153] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicy
I0919 11:28:03.729494  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:28:03.729515  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:28:03.730124  108638 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0919 11:28:03.730298  108638 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.730319  108638 reflector.go:153] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0919 11:28:03.730428  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:28:03.730447  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:28:03.731348  108638 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0919 11:28:03.731378  108638 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.731482  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:28:03.731496  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:28:03.731547  108638 reflector.go:153] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0919 11:28:03.732223  108638 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0919 11:28:03.732346  108638 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.732462  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:28:03.732478  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:28:03.732535  108638 reflector.go:153] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0919 11:28:03.733244  108638 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0919 11:28:03.733288  108638 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.733393  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:28:03.733413  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:28:03.733486  108638 reflector.go:153] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0919 11:28:03.734356  108638 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0919 11:28:03.734494  108638 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.734716  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:28:03.734736  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:28:03.734813  108638 reflector.go:153] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0919 11:28:03.735612  108638 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0919 11:28:03.735701  108638 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.735840  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:28:03.735863  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:28:03.735962  108638 reflector.go:153] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0919 11:28:03.736543  108638 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0919 11:28:03.736615  108638 reflector.go:153] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0919 11:28:03.736705  108638 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.736835  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:28:03.736850  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:28:03.737394  108638 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0919 11:28:03.737421  108638 master.go:461] Enabling API group "rbac.authorization.k8s.io".
I0919 11:28:03.737886  108638 reflector.go:153] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0919 11:28:03.740749  108638 watch_cache.go:405] Replace watchCache (rev: 46484) 
I0919 11:28:03.741361  108638 watch_cache.go:405] Replace watchCache (rev: 46484) 
I0919 11:28:03.741452  108638 watch_cache.go:405] Replace watchCache (rev: 46484) 
I0919 11:28:03.742113  108638 watch_cache.go:405] Replace watchCache (rev: 46484) 
I0919 11:28:03.742205  108638 watch_cache.go:405] Replace watchCache (rev: 46484) 
I0919 11:28:03.742524  108638 watch_cache.go:405] Replace watchCache (rev: 46484) 
I0919 11:28:03.742819  108638 watch_cache.go:405] Replace watchCache (rev: 46484) 
I0919 11:28:03.743068  108638 watch_cache.go:405] Replace watchCache (rev: 46484) 
I0919 11:28:03.743481  108638 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.743589  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:28:03.743608  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:28:03.743667  108638 watch_cache.go:405] Replace watchCache (rev: 46484) 
I0919 11:28:03.744588  108638 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0919 11:28:03.744751  108638 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.744888  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:28:03.744909  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:28:03.744990  108638 reflector.go:153] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0919 11:28:03.745878  108638 watch_cache.go:405] Replace watchCache (rev: 46484) 
I0919 11:28:03.746297  108638 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0919 11:28:03.746318  108638 master.go:461] Enabling API group "scheduling.k8s.io".
I0919 11:28:03.746391  108638 master.go:450] Skipping disabled API group "settings.k8s.io".
I0919 11:28:03.746511  108638 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.746637  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:28:03.746681  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:28:03.746634  108638 watch_cache.go:405] Replace watchCache (rev: 46484) 
I0919 11:28:03.746760  108638 reflector.go:153] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0919 11:28:03.747835  108638 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0919 11:28:03.748012  108638 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.748169  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:28:03.748187  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:28:03.748202  108638 reflector.go:153] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0919 11:28:03.748993  108638 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0919 11:28:03.749034  108638 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.749171  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:28:03.749193  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:28:03.749283  108638 reflector.go:153] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0919 11:28:03.749977  108638 watch_cache.go:405] Replace watchCache (rev: 46484) 
I0919 11:28:03.750049  108638 watch_cache.go:405] Replace watchCache (rev: 46484) 
I0919 11:28:03.750996  108638 store.go:1342] Monitoring csinodes.storage.k8s.io count at <storage-prefix>//csinodes
I0919 11:28:03.751024  108638 reflector.go:153] Listing and watching *storage.CSINode from storage/cacher.go:/csinodes
I0919 11:28:03.751149  108638 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.751616  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:28:03.751635  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:28:03.751877  108638 watch_cache.go:405] Replace watchCache (rev: 46484) 
I0919 11:28:03.752476  108638 store.go:1342] Monitoring csidrivers.storage.k8s.io count at <storage-prefix>//csidrivers
I0919 11:28:03.752672  108638 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.752792  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:28:03.752810  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:28:03.752881  108638 reflector.go:153] Listing and watching *storage.CSIDriver from storage/cacher.go:/csidrivers
I0919 11:28:03.753244  108638 watch_cache.go:405] Replace watchCache (rev: 46484) 
I0919 11:28:03.753802  108638 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0919 11:28:03.753943  108638 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.754057  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:28:03.754075  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:28:03.754152  108638 reflector.go:153] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0919 11:28:03.754728  108638 watch_cache.go:405] Replace watchCache (rev: 46484) 
I0919 11:28:03.754907  108638 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0919 11:28:03.754923  108638 master.go:461] Enabling API group "storage.k8s.io".
I0919 11:28:03.755043  108638 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.755119  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:28:03.755123  108638 reflector.go:153] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0919 11:28:03.755149  108638 watch_cache.go:405] Replace watchCache (rev: 46484) 
I0919 11:28:03.755131  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:28:03.755636  108638 store.go:1342] Monitoring deployments.apps count at <storage-prefix>//deployments
I0919 11:28:03.755742  108638 reflector.go:153] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0919 11:28:03.755816  108638 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.755917  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:28:03.755934  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:28:03.756464  108638 store.go:1342] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0919 11:28:03.756588  108638 reflector.go:153] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0919 11:28:03.756610  108638 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.756749  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:28:03.756766  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:28:03.757300  108638 store.go:1342] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0919 11:28:03.757441  108638 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.757554  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:28:03.757569  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:28:03.757634  108638 reflector.go:153] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0919 11:28:03.757917  108638 watch_cache.go:405] Replace watchCache (rev: 46484) 
I0919 11:28:03.758292  108638 watch_cache.go:405] Replace watchCache (rev: 46484) 
I0919 11:28:03.758341  108638 watch_cache.go:405] Replace watchCache (rev: 46484) 
I0919 11:28:03.758633  108638 store.go:1342] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0919 11:28:03.758714  108638 reflector.go:153] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0919 11:28:03.758783  108638 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.758925  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:28:03.758945  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:28:03.759210  108638 watch_cache.go:405] Replace watchCache (rev: 46484) 
I0919 11:28:03.759565  108638 store.go:1342] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0919 11:28:03.759585  108638 master.go:461] Enabling API group "apps".
I0919 11:28:03.759613  108638 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.759746  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:28:03.759761  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:28:03.759825  108638 reflector.go:153] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0919 11:28:03.760869  108638 watch_cache.go:405] Replace watchCache (rev: 46484) 
I0919 11:28:03.761221  108638 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0919 11:28:03.761239  108638 watch_cache.go:405] Replace watchCache (rev: 46484) 
I0919 11:28:03.761253  108638 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.761358  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:28:03.761374  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:28:03.761441  108638 reflector.go:153] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0919 11:28:03.762189  108638 watch_cache.go:405] Replace watchCache (rev: 46484) 
I0919 11:28:03.762394  108638 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0919 11:28:03.762418  108638 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.762509  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:28:03.762521  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:28:03.762565  108638 reflector.go:153] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0919 11:28:03.763415  108638 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0919 11:28:03.763438  108638 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.763524  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:28:03.763537  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:28:03.763586  108638 reflector.go:153] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0919 11:28:03.763942  108638 watch_cache.go:405] Replace watchCache (rev: 46484) 
I0919 11:28:03.764133  108638 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0919 11:28:03.764156  108638 master.go:461] Enabling API group "admissionregistration.k8s.io".
I0919 11:28:03.764179  108638 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.764255  108638 reflector.go:153] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0919 11:28:03.764380  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:28:03.764393  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:28:03.764583  108638 watch_cache.go:405] Replace watchCache (rev: 46484) 
I0919 11:28:03.764840  108638 store.go:1342] Monitoring events count at <storage-prefix>//events
I0919 11:28:03.764859  108638 reflector.go:153] Listing and watching *core.Event from storage/cacher.go:/events
I0919 11:28:03.764862  108638 master.go:461] Enabling API group "events.k8s.io".
I0919 11:28:03.765106  108638 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.765304  108638 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.765548  108638 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.765598  108638 watch_cache.go:405] Replace watchCache (rev: 46484) 
I0919 11:28:03.765632  108638 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.765749  108638 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.765814  108638 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.765964  108638 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.766053  108638 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.766144  108638 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.766218  108638 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.766918  108638 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.767108  108638 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.767716  108638 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.768073  108638 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.768843  108638 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.769135  108638 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.770054  108638 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.771078  108638 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.771630  108638 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.771879  108638 watch_cache.go:405] Replace watchCache (rev: 46484) 
I0919 11:28:03.771875  108638 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0919 11:28:03.771992  108638 genericapiserver.go:404] Skipping API batch/v2alpha1 because it has no resources.
I0919 11:28:03.772612  108638 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.772736  108638 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.772912  108638 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.773544  108638 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.774138  108638 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.774817  108638 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.775109  108638 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.775769  108638 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.776300  108638 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.776555  108638 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.777122  108638 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0919 11:28:03.777181  108638 genericapiserver.go:404] Skipping API node.k8s.io/v1alpha1 because it has no resources.
I0919 11:28:03.777790  108638 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.778029  108638 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.778449  108638 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.779039  108638 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.779501  108638 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.779992  108638 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.780500  108638 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.781006  108638 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.781334  108638 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.781840  108638 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.782469  108638 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0919 11:28:03.782565  108638 genericapiserver.go:404] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
I0919 11:28:03.783096  108638 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.783614  108638 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0919 11:28:03.783717  108638 genericapiserver.go:404] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
I0919 11:28:03.784143  108638 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.784631  108638 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.784836  108638 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.785318  108638 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.785778  108638 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.786248  108638 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.786702  108638 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0919 11:28:03.786751  108638 genericapiserver.go:404] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
I0919 11:28:03.787378  108638 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.787934  108638 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.788176  108638 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.788930  108638 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.789151  108638 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.789370  108638 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.789919  108638 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.790120  108638 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.790334  108638 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.790956  108638 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.791182  108638 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.791390  108638 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0919 11:28:03.791442  108638 genericapiserver.go:404] Skipping API apps/v1beta2 because it has no resources.
W0919 11:28:03.791449  108638 genericapiserver.go:404] Skipping API apps/v1beta1 because it has no resources.
I0919 11:28:03.792018  108638 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.792576  108638 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.793123  108638 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.793721  108638 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.794329  108638 storage_factory.go:285] storing events.events.k8s.io in events.k8s.io/v1beta1, reading as events.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"aaeea704-0479-4269-8640-d1ad28d4e936", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:28:03.797131  108638 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 11:28:03.797290  108638 healthz.go:177] healthz check poststarthook/bootstrap-controller failed: not finished
I0919 11:28:03.797306  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:28:03.797315  108638 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 11:28:03.797323  108638 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 11:28:03.797330  108638 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[-]poststarthook/bootstrap-controller failed: reason withheld
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 11:28:03.797370  108638 httplog.go:90] GET /healthz: (364.979µs) 0 [Go-http-client/1.1 127.0.0.1:42244]
I0919 11:28:03.798888  108638 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.880747ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42246]
I0919 11:28:03.801318  108638 httplog.go:90] GET /api/v1/services: (1.033045ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42246]
I0919 11:28:03.805525  108638 httplog.go:90] GET /api/v1/services: (1.184277ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42246]
I0919 11:28:03.807480  108638 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 11:28:03.807503  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:28:03.807514  108638 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 11:28:03.807520  108638 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 11:28:03.807527  108638 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 11:28:03.807862  108638 httplog.go:90] GET /healthz: (475.197µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42246]
I0919 11:28:03.808778  108638 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.325283ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42244]
I0919 11:28:03.809221  108638 httplog.go:90] GET /api/v1/services: (834.211µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42246]
I0919 11:28:03.810339  108638 httplog.go:90] GET /api/v1/services: (850.317µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42246]
I0919 11:28:03.811481  108638 httplog.go:90] POST /api/v1/namespaces: (1.092297ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42244]
I0919 11:28:03.812812  108638 httplog.go:90] GET /api/v1/namespaces/kube-public: (921.006µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42246]
I0919 11:28:03.814559  108638 httplog.go:90] POST /api/v1/namespaces: (1.353829ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42246]
I0919 11:28:03.816065  108638 httplog.go:90] GET /api/v1/namespaces/kube-node-lease: (1.078203ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42246]
I0919 11:28:03.817874  108638 httplog.go:90] POST /api/v1/namespaces: (1.353448ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42246]
I0919 11:28:03.898114  108638 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 11:28:03.898154  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:28:03.898167  108638 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 11:28:03.898176  108638 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 11:28:03.898184  108638 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 11:28:03.898227  108638 httplog.go:90] GET /healthz: (260.676µs) 0 [Go-http-client/1.1 127.0.0.1:42246]
I0919 11:28:04.618930  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:28:04.619013  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:28:04.699028  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:28:04.699073  108638 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 11:28:04.699085  108638 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 11:28:04.699094  108638 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 11:28:04.699138  108638 httplog.go:90] GET /healthz: (1.195587ms) 0 [Go-http-client/1.1 127.0.0.1:42246]
I0919 11:28:04.800569  108638 httplog.go:90] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-node-critical: (2.611271ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:04.800834  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.801012ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42246]
I0919 11:28:04.801189  108638 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.837429ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42494]
I0919 11:28:04.803888  108638 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: (1.767815ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42494]
I0919 11:28:04.803894  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.057835ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:04.804304  108638 httplog.go:90] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (2.478421ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42496]
I0919 11:28:04.804457  108638 storage_scheduling.go:139] created PriorityClass system-node-critical with value 2000001000
I0919 11:28:04.806613  108638 httplog.go:90] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-cluster-critical: (1.839301ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42496]
I0919 11:28:04.806813  108638 httplog.go:90] POST /api/v1/namespaces/kube-system/configmaps: (2.490544ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42246]
I0919 11:28:04.806840  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (2.619051ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:04.810681  108638 httplog.go:90] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (3.272717ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:04.810874  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (3.646836ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42246]
I0919 11:28:04.810981  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:28:04.810996  108638 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 11:28:04.811005  108638 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:28:04.811033  108638 httplog.go:90] GET /healthz: (2.50247ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42498]
I0919 11:28:04.811471  108638 storage_scheduling.go:139] created PriorityClass system-cluster-critical with value 2000000000
I0919 11:28:04.811485  108638 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
I0919 11:28:04.814596  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (3.052749ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42246]
I0919 11:28:04.818124  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (2.686441ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:04.820876  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (1.567387ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:04.822503  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (1.146662ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:04.823835  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (879.981µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:04.829463  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-admin: (5.090767ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:04.831830  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.771733ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:04.832041  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/cluster-admin
I0919 11:28:04.833267  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:discovery: (938.482µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:04.836016  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.20187ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:04.836284  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:discovery
I0919 11:28:04.837431  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:basic-user: (1.013395ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:04.839678  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.87586ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:04.840033  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:basic-user
I0919 11:28:04.845542  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:public-info-viewer: (5.361629ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:04.848025  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.969225ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:04.848263  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:public-info-viewer
I0919 11:28:04.849528  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (875.847µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:04.851743  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.67819ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:04.851926  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/admin
I0919 11:28:04.853166  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (1.098286ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:04.855020  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.601113ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:04.855783  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/edit
I0919 11:28:04.857198  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (1.02794ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:04.859150  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.406922ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:04.859458  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/view
I0919 11:28:04.860616  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (983.825µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:04.862618  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.465815ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:04.862901  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin
I0919 11:28:04.863967  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (928.85µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:04.866902  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.029775ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:04.867184  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit
I0919 11:28:04.868079  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (728.332µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:04.872512  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (4.06232ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:04.873001  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view
I0919 11:28:04.874270  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:heapster: (1.129776ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:04.876504  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.740957ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:04.876980  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:heapster
I0919 11:28:04.878048  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node: (792.213µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:04.880170  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.710838ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:04.880402  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node
I0919 11:28:04.881494  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-problem-detector: (824.816µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:04.883154  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.120168ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:04.883310  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector
I0919 11:28:04.884403  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kubelet-api-admin: (926.974µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:04.886148  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.222354ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:04.886429  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kubelet-api-admin
I0919 11:28:04.887740  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-bootstrapper: (1.116195ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:04.889549  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.377798ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:04.889911  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-bootstrapper
I0919 11:28:04.891500  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:auth-delegator: (1.418945ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:04.893419  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.479221ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:04.893948  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:auth-delegator
I0919 11:28:04.895130  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-aggregator: (908.959µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:04.897088  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.603094ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:04.897435  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator
I0919 11:28:04.898552  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:28:04.898580  108638 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:28:04.898610  108638 httplog.go:90] GET /healthz: (659.144µs) 0 [Go-http-client/1.1 127.0.0.1:42498]
I0919 11:28:04.898715  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-controller-manager: (1.138561ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:04.902027  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.873273ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:04.902370  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager
I0919 11:28:04.903696  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-dns: (695.975µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:04.905855  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.608603ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:04.906081  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-dns
I0919 11:28:04.906973  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:persistent-volume-provisioner: (687.426µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:04.908903  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.449966ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:04.909120  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner
I0919 11:28:04.909474  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:28:04.912035  108638 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:28:04.912218  108638 httplog.go:90] GET /healthz: (3.887143ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42498]
I0919 11:28:04.916415  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-attacher: (6.974907ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:04.919018  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.051255ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:04.919283  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:csi-external-attacher
I0919 11:28:04.928956  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:nodeclient: (9.441777ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:04.931111  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.5689ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:04.931335  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:nodeclient
I0919 11:28:04.932692  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient: (885.176µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:04.934739  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.699645ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:04.935228  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
I0919 11:28:04.936337  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:volume-scheduler: (727.764µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:04.939006  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.015218ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:04.939701  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:volume-scheduler
I0919 11:28:04.940738  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-proxier: (732.851µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:04.942845  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.514457ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:04.943396  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-proxier
I0919 11:28:04.944424  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-scheduler: (722.642µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:04.946984  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.020106ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:04.947373  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler
I0919 11:28:04.948304  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-provisioner: (739.747µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:04.950179  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.449666ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:04.950633  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:csi-external-provisioner
I0919 11:28:04.951734  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:attachdetach-controller: (918.811µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:04.953903  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.725593ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:04.954167  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0919 11:28:04.955505  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:clusterrole-aggregation-controller: (1.040436ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:04.957291  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.385987ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:04.957459  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0919 11:28:04.958370  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:cronjob-controller: (720.902µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:04.960127  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.343719ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:04.960300  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0919 11:28:04.961294  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:daemon-set-controller: (816.287µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:04.963347  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.609937ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:04.963566  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0919 11:28:04.964692  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:deployment-controller: (861.051µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:04.968057  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.018537ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:04.968427  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller
I0919 11:28:04.972019  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:disruption-controller: (3.282247ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:04.974231  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.668763ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:04.974592  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller
I0919 11:28:04.975975  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:endpoint-controller: (888.91µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:04.978087  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.517618ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:04.978872  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0919 11:28:04.980048  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:expand-controller: (918.522µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:04.982089  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.376109ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:04.982460  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:expand-controller
I0919 11:28:04.983901  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:generic-garbage-collector: (1.086453ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:04.985752  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.349671ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:04.986032  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0919 11:28:04.987466  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:horizontal-pod-autoscaler: (1.010179ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:04.990024  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.961216ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:04.990532  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0919 11:28:04.992453  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:job-controller: (748.707µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:04.995467  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.401853ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:04.995843  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller
I0919 11:28:04.996987  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:namespace-controller: (784.44µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:04.999102  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:28:04.999229  108638 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:28:04.999472  108638 httplog.go:90] GET /healthz: (1.645474ms) 0 [Go-http-client/1.1 127.0.0.1:42498]
I0919 11:28:04.999363  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.906789ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:05.000341  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:namespace-controller
I0919 11:28:05.001471  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:node-controller: (756.789µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:05.004524  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.816749ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:05.004995  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:node-controller
I0919 11:28:05.006324  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:persistent-volume-binder: (1.023733ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:05.009140  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.161976ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:05.009413  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0919 11:28:05.010503  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pod-garbage-collector: (881.143µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:05.015054  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:28:05.015079  108638 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:28:05.015114  108638 httplog.go:90] GET /healthz: (6.638713ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42498]
I0919 11:28:05.020865  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (9.934593ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:05.021307  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0919 11:28:05.022464  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replicaset-controller: (944.813µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:05.024993  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.999894ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:05.025792  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0919 11:28:05.029123  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replication-controller: (3.139174ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:05.031872  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.056566ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:05.032121  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller
I0919 11:28:05.033588  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:resourcequota-controller: (1.075489ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:05.035616  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.638269ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:05.035855  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0919 11:28:05.037167  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:route-controller: (1.104436ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:05.039437  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.73098ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:05.039687  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:route-controller
I0919 11:28:05.040888  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-account-controller: (1.041052ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:05.043527  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.262101ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:05.043870  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-account-controller
I0919 11:28:05.044788  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-controller: (745.079µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:05.046635  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.462057ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:05.046991  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-controller
I0919 11:28:05.048027  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:statefulset-controller: (857.481µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:05.049853  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.235107ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:05.050040  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0919 11:28:05.050992  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:ttl-controller: (785.414µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:05.052616  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.275994ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:05.052995  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-controller
I0919 11:28:05.053888  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:certificate-controller: (737.572µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:05.055538  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.341445ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:05.055851  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller
I0919 11:28:05.057224  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pvc-protection-controller: (1.093994ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:05.059230  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.40807ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:05.059502  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0919 11:28:05.078729  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pv-protection-controller: (1.223872ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:05.100374  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.843365ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:05.100776  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0919 11:28:05.106059  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:28:05.106091  108638 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:28:05.106140  108638 httplog.go:90] GET /healthz: (6.689493ms) 0 [Go-http-client/1.1 127.0.0.1:42498]
I0919 11:28:05.109253  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:28:05.109278  108638 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:28:05.109313  108638 httplog.go:90] GET /healthz: (976.817µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42498]
I0919 11:28:05.129071  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin: (11.012373ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42498]
I0919 11:28:05.139807  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.302911ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42498]
I0919 11:28:05.140032  108638 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin
I0919 11:28:05.158713  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (1.161106ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42498]
I0919 11:28:05.179731  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.235423ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42498]
I0919 11:28:05.179965  108638 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery
I0919 11:28:05.199116  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:basic-user: (1.648895ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42498]
I0919 11:28:05.200738  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:28:05.200763  108638 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:28:05.200794  108638 httplog.go:90] GET /healthz: (1.546219ms) 0 [Go-http-client/1.1 127.0.0.1:42248]
I0919 11:28:05.209574  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:28:05.209603  108638 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:28:05.209650  108638 httplog.go:90] GET /healthz: (1.229207ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:05.222006  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.82961ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:05.222322  108638 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user
I0919 11:28:05.238762  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:public-info-viewer: (1.222813ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:05.259546  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.043838ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:05.259978  108638 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:public-info-viewer
I0919 11:28:05.278630  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node-proxier: (1.189407ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:05.299389  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:28:05.299420  108638 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:28:05.299457  108638 httplog.go:90] GET /healthz: (778.219µs) 0 [Go-http-client/1.1 127.0.0.1:42498]
I0919 11:28:05.299801  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.204136ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:05.300017  108638 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier
I0919 11:28:05.309181  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:28:05.309208  108638 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:28:05.309243  108638 httplog.go:90] GET /healthz: (863.438µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:05.318876  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-controller-manager: (1.025646ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:05.339466  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.964232ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:05.340365  108638 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager
I0919 11:28:05.358847  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-dns: (1.349029ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:05.379762  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.136084ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:05.380167  108638 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns
I0919 11:28:05.399360  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-scheduler: (1.832056ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:05.399528  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:28:05.399547  108638 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:28:05.399585  108638 httplog.go:90] GET /healthz: (1.744172ms) 0 [Go-http-client/1.1 127.0.0.1:42498]
I0919 11:28:05.410096  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:28:05.410119  108638 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:28:05.410157  108638 httplog.go:90] GET /healthz: (1.725264ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42498]
I0919 11:28:05.419934  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.490081ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42498]
I0919 11:28:05.420170  108638 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler
I0919 11:28:05.438960  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:volume-scheduler: (1.497531ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42498]
I0919 11:28:05.463161  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (5.662886ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42498]
I0919 11:28:05.463393  108638 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:volume-scheduler
I0919 11:28:05.478915  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node: (1.296488ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42498]
I0919 11:28:05.499612  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.09585ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42498]
I0919 11:28:05.499622  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:28:05.499726  108638 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:28:05.499765  108638 httplog.go:90] GET /healthz: (1.316765ms) 0 [Go-http-client/1.1 127.0.0.1:42248]
I0919 11:28:05.499936  108638 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node
I0919 11:28:05.509991  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:28:05.510042  108638 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:28:05.510081  108638 httplog.go:90] GET /healthz: (971.014µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:05.518456  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:attachdetach-controller: (1.048533ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:05.539342  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.812493ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:05.539767  108638 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0919 11:28:05.558701  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:clusterrole-aggregation-controller: (1.143513ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:05.579259  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.811728ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:05.579594  108638 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0919 11:28:05.598997  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:28:05.599026  108638 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:28:05.599065  108638 httplog.go:90] GET /healthz: (1.207269ms) 0 [Go-http-client/1.1 127.0.0.1:42498]
I0919 11:28:05.599155  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:cronjob-controller: (1.639732ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:05.609276  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:28:05.609307  108638 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:28:05.609342  108638 httplog.go:90] GET /healthz: (893.056µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:05.619262  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.79797ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:05.619502  108638 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0919 11:28:05.638570  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:daemon-set-controller: (1.087293ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:05.659688  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.08356ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:05.660147  108638 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0919 11:28:05.678536  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:deployment-controller: (1.040351ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:05.699176  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:28:05.699209  108638 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:28:05.699248  108638 httplog.go:90] GET /healthz: (869.726µs) 0 [Go-http-client/1.1 127.0.0.1:42498]
I0919 11:28:05.699659  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.146834ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:05.700094  108638 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I0919 11:28:05.709476  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:28:05.709504  108638 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:28:05.709532  108638 httplog.go:90] GET /healthz: (1.097307ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:05.719344  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:disruption-controller: (1.821737ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:05.739625  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.106155ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:05.740045  108638 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I0919 11:28:05.758947  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:endpoint-controller: (1.416692ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:05.779552  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.047153ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:05.779773  108638 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0919 11:28:05.798573  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:28:05.798778  108638 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:28:05.799003  108638 httplog.go:90] GET /healthz: (1.150397ms) 0 [Go-http-client/1.1 127.0.0.1:42498]
I0919 11:28:05.799099  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:expand-controller: (1.630465ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:05.809419  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:28:05.809769  108638 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:28:05.809996  108638 httplog.go:90] GET /healthz: (1.583758ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:05.819381  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.918602ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:05.819603  108638 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
I0919 11:28:05.838622  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:generic-garbage-collector: (1.092591ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:05.859592  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.076541ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:05.859830  108638 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0919 11:28:05.879089  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:horizontal-pod-autoscaler: (1.484594ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:05.900221  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:28:05.900255  108638 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:28:05.900294  108638 httplog.go:90] GET /healthz: (1.748197ms) 0 [Go-http-client/1.1 127.0.0.1:42498]
I0919 11:28:05.900469  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.95197ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:05.901113  108638 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0919 11:28:05.909274  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:28:05.909307  108638 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:28:05.909342  108638 httplog.go:90] GET /healthz: (963.024µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42498]
I0919 11:28:05.918666  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:job-controller: (1.166397ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42498]
I0919 11:28:05.939544  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.015291ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42498]
I0919 11:28:05.939819  108638 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I0919 11:28:05.958417  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:namespace-controller: (930.378µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42498]
I0919 11:28:05.979750  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.170981ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42498]
I0919 11:28:05.980246  108638 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I0919 11:28:05.998995  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:node-controller: (1.396188ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42498]
I0919 11:28:05.999145  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:28:05.999163  108638 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:28:05.999190  108638 httplog.go:90] GET /healthz: (1.316337ms) 0 [Go-http-client/1.1 127.0.0.1:42248]
I0919 11:28:06.009125  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:28:06.009297  108638 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:28:06.009423  108638 httplog.go:90] GET /healthz: (1.037808ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:06.020008  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.439425ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:06.020291  108638 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I0919 11:28:06.038550  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:persistent-volume-binder: (1.155787ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:06.060006  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.133447ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:06.060481  108638 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0919 11:28:06.078551  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pod-garbage-collector: (1.066123ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:06.098559  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:28:06.098592  108638 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:28:06.098627  108638 httplog.go:90] GET /healthz: (823.979µs) 0 [Go-http-client/1.1 127.0.0.1:42498]
I0919 11:28:06.099387  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.887612ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:06.099621  108638 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0919 11:28:06.109306  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:28:06.109358  108638 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:28:06.109401  108638 httplog.go:90] GET /healthz: (987.137µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:06.118424  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replicaset-controller: (970.059µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:06.139292  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.844003ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:06.139715  108638 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0919 11:28:06.158673  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replication-controller: (1.14102ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:06.179744  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.161453ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:06.179979  108638 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I0919 11:28:06.198911  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:28:06.199440  108638 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:28:06.199495  108638 httplog.go:90] GET /healthz: (1.604025ms) 0 [Go-http-client/1.1 127.0.0.1:42498]
I0919 11:28:06.199274  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:resourcequota-controller: (1.709651ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:06.209412  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:28:06.209587  108638 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:28:06.209779  108638 httplog.go:90] GET /healthz: (1.315005ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42498]
I0919 11:28:06.219709  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.182085ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42498]
I0919 11:28:06.220586  108638 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0919 11:28:06.238661  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:route-controller: (1.115341ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42498]
I0919 11:28:06.260537  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.888314ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42498]
I0919 11:28:06.260789  108638 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I0919 11:28:06.278464  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-account-controller: (987.472µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42498]
I0919 11:28:06.299211  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:28:06.299240  108638 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:28:06.299267  108638 httplog.go:90] GET /healthz: (897.038µs) 0 [Go-http-client/1.1 127.0.0.1:42248]
I0919 11:28:06.299861  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.3295ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42498]
I0919 11:28:06.300071  108638 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I0919 11:28:06.309176  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:28:06.309203  108638 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:28:06.309244  108638 httplog.go:90] GET /healthz: (822.781µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42498]
I0919 11:28:06.318482  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-controller: (1.042824ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42498]
I0919 11:28:06.339871  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.340907ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42498]
I0919 11:28:06.340049  108638 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I0919 11:28:06.358829  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:statefulset-controller: (1.240353ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42498]
I0919 11:28:06.379830  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.291365ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42498]
I0919 11:28:06.380071  108638 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0919 11:28:06.398972  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:ttl-controller: (1.404899ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42498]
I0919 11:28:06.399027  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:28:06.399050  108638 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:28:06.399082  108638 httplog.go:90] GET /healthz: (1.270293ms) 0 [Go-http-client/1.1 127.0.0.1:42248]
I0919 11:28:06.409564  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:28:06.409724  108638 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:28:06.409864  108638 httplog.go:90] GET /healthz: (1.410372ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:06.420036  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.410601ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:06.420319  108638 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I0919 11:28:06.438805  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:certificate-controller: (1.196182ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:06.463144  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.448843ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:06.463423  108638 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I0919 11:28:06.493298  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pvc-protection-controller: (15.757692ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:06.498930  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:28:06.499097  108638 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:28:06.499344  108638 httplog.go:90] GET /healthz: (1.379504ms) 0 [Go-http-client/1.1 127.0.0.1:42248]
I0919 11:28:06.500111  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.710663ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42498]
I0919 11:28:06.500397  108638 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0919 11:28:06.518332  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:28:06.518576  108638 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:28:06.518863  108638 httplog.go:90] GET /healthz: (10.373058ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42498]
I0919 11:28:06.521382  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pv-protection-controller: (3.931741ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:06.540634  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.134584ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:06.541610  108638 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0919 11:28:06.559234  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/extension-apiserver-authentication-reader: (1.447349ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:06.561066  108638 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.282891ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:06.579506  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.95978ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:06.579775  108638 storage_rbac.go:278] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
I0919 11:28:06.598954  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:28:06.598984  108638 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:28:06.599013  108638 httplog.go:90] GET /healthz: (846.338µs) 0 [Go-http-client/1.1 127.0.0.1:42498]
I0919 11:28:06.599064  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:bootstrap-signer: (1.538863ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:06.600497  108638 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.082517ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:06.610857  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:28:06.610894  108638 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:28:06.610937  108638 httplog.go:90] GET /healthz: (1.038194ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:06.619431  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.875332ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:06.619809  108638 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0919 11:28:06.638945  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:cloud-provider: (1.383955ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:06.640968  108638 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.575602ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:06.659626  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.11914ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:06.659869  108638 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0919 11:28:06.678619  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:token-cleaner: (1.191152ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:06.680893  108638 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.53214ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:06.699456  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:28:06.699697  108638 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:28:06.699887  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.338019ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:06.700441  108638 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0919 11:28:06.700759  108638 httplog.go:90] GET /healthz: (2.939448ms) 0 [Go-http-client/1.1 127.0.0.1:42498]
I0919 11:28:06.709431  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:28:06.709463  108638 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:28:06.709506  108638 httplog.go:90] GET /healthz: (1.091121ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42498]
I0919 11:28:06.718755  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-controller-manager: (1.109511ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42498]
I0919 11:28:06.722564  108638 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.194342ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42498]
I0919 11:28:06.739487  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.891847ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42498]
I0919 11:28:06.739929  108638 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0919 11:28:06.758857  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-scheduler: (1.276316ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42498]
I0919 11:28:06.760426  108638 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.143532ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42498]
I0919 11:28:06.779508  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.973505ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42498]
I0919 11:28:06.779897  108638 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0919 11:28:06.799154  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles/system:controller:bootstrap-signer: (1.573396ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42498]
I0919 11:28:06.799462  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:28:06.799498  108638 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:28:06.799547  108638 httplog.go:90] GET /healthz: (1.580738ms) 0 [Go-http-client/1.1 127.0.0.1:42248]
I0919 11:28:06.801055  108638 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.040543ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:06.809389  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:28:06.809589  108638 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:28:06.809828  108638 httplog.go:90] GET /healthz: (1.346857ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:06.819586  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles: (2.174548ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:06.819935  108638 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0919 11:28:06.838934  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings/system:controller:bootstrap-signer: (1.402723ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:06.841006  108638 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.401063ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:06.859613  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings: (2.038351ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:06.859948  108638 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0919 11:28:06.879979  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::extension-apiserver-authentication-reader: (1.259261ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:06.882088  108638 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.706045ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:06.899211  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:28:06.899423  108638 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:28:06.899628  108638 httplog.go:90] GET /healthz: (1.72006ms) 0 [Go-http-client/1.1 127.0.0.1:42498]
I0919 11:28:06.899446  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.937095ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:06.900192  108638 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::extension-apiserver-authentication-reader in kube-system
I0919 11:28:06.909358  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:28:06.909514  108638 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:28:06.909667  108638 httplog.go:90] GET /healthz: (1.236522ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:06.918543  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-controller-manager: (1.100773ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:06.919997  108638 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.023578ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:06.939357  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.896784ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:06.939570  108638 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0919 11:28:06.958741  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-scheduler: (1.154892ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:06.960461  108638 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.272621ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:06.979688  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.088933ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:06.980101  108638 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0919 11:28:06.999148  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:bootstrap-signer: (1.531828ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:06.999342  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:28:06.999478  108638 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:28:06.999694  108638 httplog.go:90] GET /healthz: (1.823951ms) 0 [Go-http-client/1.1 127.0.0.1:42498]
I0919 11:28:07.001405  108638 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.445803ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:07.009228  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:28:07.009406  108638 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:28:07.009540  108638 httplog.go:90] GET /healthz: (1.144841ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:07.019485  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.990166ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:07.019903  108638 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0919 11:28:07.038752  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:cloud-provider: (1.223599ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:07.040581  108638 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.290601ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:07.059451  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.935342ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:07.060021  108638 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0919 11:28:07.078801  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:token-cleaner: (1.290105ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:07.080312  108638 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.066829ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:07.099158  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:28:07.099222  108638 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:28:07.099265  108638 httplog.go:90] GET /healthz: (1.333798ms) 0 [Go-http-client/1.1 127.0.0.1:42498]
I0919 11:28:07.099955  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.410722ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:07.100354  108638 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0919 11:28:07.109395  108638 httplog.go:90] GET /healthz: (926.945µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:07.110850  108638 httplog.go:90] GET /api/v1/namespaces/default: (1.079645ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:07.115241  108638 httplog.go:90] POST /api/v1/namespaces: (2.706711ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:07.117315  108638 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.597548ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:07.121944  108638 httplog.go:90] POST /api/v1/namespaces/default/services: (4.195963ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:07.123129  108638 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (905.206µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:07.124137  108638 httplog.go:90] POST /api/v1/namespaces/default/endpoints: (632.42µs) 422 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
E0919 11:28:07.124531  108638 controller.go:224] unable to sync kubernetes service: Endpoints "kubernetes" is invalid: [subsets[0].addresses[0].ip: Invalid value: "<nil>": must be a valid IP address, (e.g. 10.9.8.7), subsets[0].addresses[0].ip: Invalid value: "<nil>": must be a valid IP address]
I0919 11:28:07.199617  108638 httplog.go:90] GET /healthz: (1.158649ms) 200 [Go-http-client/1.1 127.0.0.1:42248]
I0919 11:28:07.202631  108638 httplog.go:90] POST /api/v1/namespaces/kube-system/configmaps: (2.118804ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
W0919 11:28:07.203094  108638 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 11:28:07.203156  108638 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 11:28:07.203171  108638 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 11:28:07.203202  108638 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 11:28:07.203218  108638 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 11:28:07.203230  108638 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 11:28:07.203239  108638 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 11:28:07.203250  108638 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 11:28:07.203259  108638 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 11:28:07.203269  108638 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 11:28:07.203328  108638 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0919 11:28:07.204796  108638 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps/scheduler-custom-policy-config-0: (1.21316ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:07.206269  108638 factory.go:304] Creating scheduler from configuration: {{ } [{PredicateOne <nil>} {PredicateTwo <nil>}] [{PriorityOne 1 <nil>} {PriorityTwo 5 <nil>}] [] 0 false}
I0919 11:28:07.206329  108638 factory.go:321] Registering predicate: PredicateOne
I0919 11:28:07.206339  108638 plugins.go:288] Predicate type PredicateOne already registered, reusing.
I0919 11:28:07.206346  108638 factory.go:321] Registering predicate: PredicateTwo
I0919 11:28:07.206350  108638 plugins.go:288] Predicate type PredicateTwo already registered, reusing.
I0919 11:28:07.206356  108638 factory.go:336] Registering priority: PriorityOne
I0919 11:28:07.206364  108638 plugins.go:399] Priority type PriorityOne already registered, reusing.
I0919 11:28:07.206376  108638 factory.go:336] Registering priority: PriorityTwo
I0919 11:28:07.206381  108638 plugins.go:399] Priority type PriorityTwo already registered, reusing.
I0919 11:28:07.206389  108638 factory.go:382] Creating scheduler with fit predicates 'map[PredicateOne:{} PredicateTwo:{}]' and priority functions 'map[PriorityOne:{} PriorityTwo:{}]'
I0919 11:28:07.208122  108638 httplog.go:90] POST /api/v1/namespaces/kube-system/configmaps: (1.275905ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
W0919 11:28:07.208353  108638 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0919 11:28:07.209691  108638 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps/scheduler-custom-policy-config-1: (1.041456ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:07.209967  108638 factory.go:304] Creating scheduler from configuration: {{ } [] [] [] 0 false}
I0919 11:28:07.209997  108638 factory.go:313] Using predicates from algorithm provider 'DefaultProvider'
I0919 11:28:07.210010  108638 factory.go:328] Using priorities from algorithm provider 'DefaultProvider'
I0919 11:28:07.210016  108638 factory.go:382] Creating scheduler with fit predicates 'map[CheckNodeUnschedulable:{} CheckVolumeBinding:{} GeneralPredicates:{} MatchInterPodAffinity:{} MaxAzureDiskVolumeCount:{} MaxCSIVolumeCountPred:{} MaxEBSVolumeCount:{} MaxGCEPDVolumeCount:{} NoDiskConflict:{} NoVolumeZoneConflict:{} PodToleratesNodeTaints:{}]' and priority functions 'map[BalancedResourceAllocation:{} ImageLocalityPriority:{} InterPodAffinityPriority:{} LeastRequestedPriority:{} NodeAffinityPriority:{} NodePreferAvoidPodsPriority:{} SelectorSpreadPriority:{} TaintTolerationPriority:{}]'
I0919 11:28:07.215248  108638 httplog.go:90] POST /api/v1/namespaces/kube-system/configmaps: (4.813161ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
W0919 11:28:07.216161  108638 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0919 11:28:07.217756  108638 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps/scheduler-custom-policy-config-2: (1.246585ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:07.218081  108638 factory.go:304] Creating scheduler from configuration: {{ } [] [] [] 0 false}
I0919 11:28:07.218112  108638 factory.go:382] Creating scheduler with fit predicates 'map[]' and priority functions 'map[]'
I0919 11:28:07.220152  108638 httplog.go:90] POST /api/v1/namespaces/kube-system/configmaps: (1.665402ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
W0919 11:28:07.220431  108638 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0919 11:28:07.221553  108638 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps/scheduler-custom-policy-config-3: (812.345µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:07.221998  108638 factory.go:304] Creating scheduler from configuration: {{ } [{PredicateOne <nil>} {PredicateTwo <nil>}] [{PriorityOne 1 <nil>} {PriorityTwo 5 <nil>}] [] 0 false}
I0919 11:28:07.222037  108638 factory.go:321] Registering predicate: PredicateOne
I0919 11:28:07.222047  108638 plugins.go:288] Predicate type PredicateOne already registered, reusing.
I0919 11:28:07.222054  108638 factory.go:321] Registering predicate: PredicateTwo
I0919 11:28:07.222060  108638 plugins.go:288] Predicate type PredicateTwo already registered, reusing.
I0919 11:28:07.222066  108638 factory.go:336] Registering priority: PriorityOne
I0919 11:28:07.222074  108638 plugins.go:399] Priority type PriorityOne already registered, reusing.
I0919 11:28:07.222101  108638 factory.go:336] Registering priority: PriorityTwo
I0919 11:28:07.222119  108638 plugins.go:399] Priority type PriorityTwo already registered, reusing.
I0919 11:28:07.222128  108638 factory.go:382] Creating scheduler with fit predicates 'map[PredicateOne:{} PredicateTwo:{}]' and priority functions 'map[PriorityOne:{} PriorityTwo:{}]'
I0919 11:28:07.223732  108638 httplog.go:90] POST /api/v1/namespaces/kube-system/configmaps: (1.258635ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
W0919 11:28:07.224004  108638 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0919 11:28:07.225421  108638 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps/scheduler-custom-policy-config-4: (923.714µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:07.225720  108638 factory.go:304] Creating scheduler from configuration: {{ } [] [] [] 0 false}
I0919 11:28:07.225749  108638 factory.go:313] Using predicates from algorithm provider 'DefaultProvider'
I0919 11:28:07.225762  108638 factory.go:328] Using priorities from algorithm provider 'DefaultProvider'
I0919 11:28:07.225768  108638 factory.go:382] Creating scheduler with fit predicates 'map[CheckNodeUnschedulable:{} CheckVolumeBinding:{} GeneralPredicates:{} MatchInterPodAffinity:{} MaxAzureDiskVolumeCount:{} MaxCSIVolumeCountPred:{} MaxEBSVolumeCount:{} MaxGCEPDVolumeCount:{} NoDiskConflict:{} NoVolumeZoneConflict:{} PodToleratesNodeTaints:{}]' and priority functions 'map[BalancedResourceAllocation:{} ImageLocalityPriority:{} InterPodAffinityPriority:{} LeastRequestedPriority:{} NodeAffinityPriority:{} NodePreferAvoidPodsPriority:{} SelectorSpreadPriority:{} TaintTolerationPriority:{}]'
I0919 11:28:07.400503  108638 request.go:538] Throttling request took 174.398098ms, request: POST:http://127.0.0.1:38033/api/v1/namespaces/kube-system/configmaps
I0919 11:28:07.402977  108638 httplog.go:90] POST /api/v1/namespaces/kube-system/configmaps: (2.204198ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
W0919 11:28:07.403302  108638 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0919 11:28:07.600482  108638 request.go:538] Throttling request took 196.854819ms, request: GET:http://127.0.0.1:38033/api/v1/namespaces/kube-system/configmaps/scheduler-custom-policy-config-5
I0919 11:28:07.602584  108638 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps/scheduler-custom-policy-config-5: (1.624982ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:07.603115  108638 factory.go:304] Creating scheduler from configuration: {{ } [] [] [] 0 false}
I0919 11:28:07.603145  108638 factory.go:382] Creating scheduler with fit predicates 'map[]' and priority functions 'map[]'
I0919 11:28:07.800513  108638 request.go:538] Throttling request took 197.121081ms, request: DELETE:http://127.0.0.1:38033/api/v1/nodes
I0919 11:28:07.802355  108638 httplog.go:90] DELETE /api/v1/nodes: (1.537946ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
I0919 11:28:07.802702  108638 controller.go:182] Shutting down kubernetes service endpoint reconciler
I0919 11:28:07.804134  108638 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.081954ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42248]
--- FAIL: TestSchedulerCreationFromConfigMap (4.19s)
    scheduler_test.go:289: Expected predicates map[CheckNodeCondition:{} PredicateOne:{} PredicateTwo:{}], got map[CheckNodeUnschedulable:{} PodToleratesNodeTaints:{} PredicateOne:{} PredicateTwo:{}]
    scheduler_test.go:289: Expected predicates map[CheckNodeCondition:{} CheckNodeDiskPressure:{} CheckNodeMemoryPressure:{} CheckNodePIDPressure:{} CheckVolumeBinding:{} GeneralPredicates:{} MatchInterPodAffinity:{} MaxAzureDiskVolumeCount:{} MaxCSIVolumeCountPred:{} MaxEBSVolumeCount:{} MaxGCEPDVolumeCount:{} NoDiskConflict:{} NoVolumeZoneConflict:{} PodToleratesNodeTaints:{}], got map[CheckNodeUnschedulable:{} CheckVolumeBinding:{} GeneralPredicates:{} MatchInterPodAffinity:{} MaxAzureDiskVolumeCount:{} MaxCSIVolumeCountPred:{} MaxEBSVolumeCount:{} MaxGCEPDVolumeCount:{} NoDiskConflict:{} NoVolumeZoneConflict:{} PodToleratesNodeTaints:{}]
    scheduler_test.go:289: Expected predicates map[CheckNodeCondition:{}], got map[CheckNodeUnschedulable:{} PodToleratesNodeTaints:{}]
    scheduler_test.go:289: Expected predicates map[CheckNodeCondition:{} PredicateOne:{} PredicateTwo:{}], got map[CheckNodeUnschedulable:{} PodToleratesNodeTaints:{} PredicateOne:{} PredicateTwo:{}]
    scheduler_test.go:289: Expected predicates map[CheckNodeCondition:{} CheckNodeDiskPressure:{} CheckNodeMemoryPressure:{} CheckNodePIDPressure:{} CheckVolumeBinding:{} GeneralPredicates:{} MatchInterPodAffinity:{} MaxAzureDiskVolumeCount:{} MaxCSIVolumeCountPred:{} MaxEBSVolumeCount:{} MaxGCEPDVolumeCount:{} NoDiskConflict:{} NoVolumeZoneConflict:{} PodToleratesNodeTaints:{}], got map[CheckNodeUnschedulable:{} CheckVolumeBinding:{} GeneralPredicates:{} MatchInterPodAffinity:{} MaxAzureDiskVolumeCount:{} MaxCSIVolumeCountPred:{} MaxEBSVolumeCount:{} MaxGCEPDVolumeCount:{} NoDiskConflict:{} NoVolumeZoneConflict:{} PodToleratesNodeTaints:{}]
    scheduler_test.go:289: Expected predicates map[CheckNodeCondition:{}], got map[CheckNodeUnschedulable:{} PodToleratesNodeTaints:{}]

				from junit_d965d8661547eb73cabe6d94d5550ec333e4c0fa_20190919-111826.xml



k8s.io/kubernetes/test/integration/scheduler TestTaintBasedEvictions 2m20s

go test -v k8s.io/kubernetes/test/integration/scheduler -run TestTaintBasedEvictions$
=== RUN   TestTaintBasedEvictions
I0919 11:28:58.783285  108638 feature_gate.go:216] feature gates: &{map[EvenPodsSpread:false TaintBasedEvictions:true]}
--- FAIL: TestTaintBasedEvictions (140.23s)

				from junit_d965d8661547eb73cabe6d94d5550ec333e4c0fa_20190919-111826.xml



k8s.io/kubernetes/test/integration/scheduler TestTaintBasedEvictions/Taint_based_evictions_for_NodeNotReady_and_0_tolerationseconds 35s

go test -v k8s.io/kubernetes/test/integration/scheduler -run TestTaintBasedEvictions/Taint_based_evictions_for_NodeNotReady_and_0_tolerationseconds$
=== RUN   TestTaintBasedEvictions/Taint_based_evictions_for_NodeNotReady_and_0_tolerationseconds
W0919 11:30:08.983842  108638 services.go:35] No CIDR for service cluster IPs specified. Default value which was 10.0.0.0/24 is deprecated and will be removed in future releases. Please specify it using --service-cluster-ip-range on kube-apiserver.
I0919 11:30:08.983877  108638 services.go:47] Setting service IP to "10.0.0.1" (read-write).
I0919 11:30:08.983892  108638 master.go:303] Node port range unspecified. Defaulting to 30000-32767.
I0919 11:30:08.983901  108638 master.go:259] Using reconciler: 
I0919 11:30:08.985505  108638 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:08.985775  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:30:08.985859  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:30:08.986628  108638 store.go:1342] Monitoring podtemplates count at <storage-prefix>//podtemplates
I0919 11:30:08.986695  108638 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:08.986762  108638 reflector.go:153] Listing and watching *core.PodTemplate from storage/cacher.go:/podtemplates
I0919 11:30:08.987046  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:30:08.987068  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:30:08.987796  108638 watch_cache.go:405] Replace watchCache (rev: 59519) 
I0919 11:30:08.988081  108638 store.go:1342] Monitoring events count at <storage-prefix>//events
I0919 11:30:08.988128  108638 reflector.go:153] Listing and watching *core.Event from storage/cacher.go:/events
I0919 11:30:08.988123  108638 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:08.988428  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:30:08.988607  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:30:08.988851  108638 watch_cache.go:405] Replace watchCache (rev: 59519) 
I0919 11:30:08.989360  108638 store.go:1342] Monitoring limitranges count at <storage-prefix>//limitranges
I0919 11:30:08.989385  108638 reflector.go:153] Listing and watching *core.LimitRange from storage/cacher.go:/limitranges
I0919 11:30:08.989559  108638 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:08.989904  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:30:08.990010  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:30:08.990068  108638 watch_cache.go:405] Replace watchCache (rev: 59519) 
I0919 11:30:08.990796  108638 store.go:1342] Monitoring resourcequotas count at <storage-prefix>//resourcequotas
I0919 11:30:08.990849  108638 reflector.go:153] Listing and watching *core.ResourceQuota from storage/cacher.go:/resourcequotas
I0919 11:30:08.990942  108638 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:08.991267  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:30:08.991347  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:30:08.991553  108638 watch_cache.go:405] Replace watchCache (rev: 59519) 
I0919 11:30:08.992171  108638 store.go:1342] Monitoring secrets count at <storage-prefix>//secrets
I0919 11:30:08.992223  108638 reflector.go:153] Listing and watching *core.Secret from storage/cacher.go:/secrets
I0919 11:30:08.992338  108638 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:08.992535  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:30:08.992568  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:30:08.992759  108638 watch_cache.go:405] Replace watchCache (rev: 59519) 
I0919 11:30:08.993432  108638 store.go:1342] Monitoring persistentvolumes count at <storage-prefix>//persistentvolumes
I0919 11:30:08.993477  108638 reflector.go:153] Listing and watching *core.PersistentVolume from storage/cacher.go:/persistentvolumes
I0919 11:30:08.993601  108638 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:08.994042  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:30:08.994071  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:30:08.994285  108638 watch_cache.go:405] Replace watchCache (rev: 59519) 
I0919 11:30:08.994760  108638 store.go:1342] Monitoring persistentvolumeclaims count at <storage-prefix>//persistentvolumeclaims
I0919 11:30:08.994881  108638 reflector.go:153] Listing and watching *core.PersistentVolumeClaim from storage/cacher.go:/persistentvolumeclaims
I0919 11:30:08.994909  108638 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:08.995110  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:30:08.995133  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:30:08.995721  108638 watch_cache.go:405] Replace watchCache (rev: 59519) 
I0919 11:30:08.996144  108638 store.go:1342] Monitoring configmaps count at <storage-prefix>//configmaps
I0919 11:30:08.996224  108638 reflector.go:153] Listing and watching *core.ConfigMap from storage/cacher.go:/configmaps
I0919 11:30:08.996451  108638 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:08.996773  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:30:08.996860  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:30:08.997246  108638 watch_cache.go:405] Replace watchCache (rev: 59519) 
I0919 11:30:08.997807  108638 store.go:1342] Monitoring namespaces count at <storage-prefix>//namespaces
I0919 11:30:08.998086  108638 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:08.997839  108638 reflector.go:153] Listing and watching *core.Namespace from storage/cacher.go:/namespaces
I0919 11:30:08.998841  108638 watch_cache.go:405] Replace watchCache (rev: 59519) 
I0919 11:30:08.999055  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:30:08.999411  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:30:09.000121  108638 store.go:1342] Monitoring endpoints count at <storage-prefix>//services/endpoints
I0919 11:30:09.000199  108638 reflector.go:153] Listing and watching *core.Endpoints from storage/cacher.go:/services/endpoints
I0919 11:30:09.000255  108638 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:09.000508  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:30:09.000531  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:30:09.001024  108638 watch_cache.go:405] Replace watchCache (rev: 59519) 
I0919 11:30:09.001125  108638 store.go:1342] Monitoring nodes count at <storage-prefix>//minions
I0919 11:30:09.001244  108638 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:09.001404  108638 reflector.go:153] Listing and watching *core.Node from storage/cacher.go:/minions
I0919 11:30:09.001423  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:30:09.001443  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:30:09.002012  108638 watch_cache.go:405] Replace watchCache (rev: 59519) 
I0919 11:30:09.002215  108638 store.go:1342] Monitoring pods count at <storage-prefix>//pods
I0919 11:30:09.002322  108638 reflector.go:153] Listing and watching *core.Pod from storage/cacher.go:/pods
I0919 11:30:09.002370  108638 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:09.002617  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:30:09.002675  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:30:09.003319  108638 store.go:1342] Monitoring serviceaccounts count at <storage-prefix>//serviceaccounts
I0919 11:30:09.003354  108638 reflector.go:153] Listing and watching *core.ServiceAccount from storage/cacher.go:/serviceaccounts
I0919 11:30:09.003428  108638 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:09.003674  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:30:09.003761  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:30:09.003696  108638 watch_cache.go:405] Replace watchCache (rev: 59519) 
I0919 11:30:09.004043  108638 watch_cache.go:405] Replace watchCache (rev: 59519) 
I0919 11:30:09.004252  108638 store.go:1342] Monitoring services count at <storage-prefix>//services/specs
I0919 11:30:09.004293  108638 reflector.go:153] Listing and watching *core.Service from storage/cacher.go:/services/specs
I0919 11:30:09.004290  108638 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:09.004578  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:30:09.004604  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:30:09.005201  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:30:09.005227  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:30:09.005242  108638 watch_cache.go:405] Replace watchCache (rev: 59519) 
I0919 11:30:09.005869  108638 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:09.006054  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:30:09.006074  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:30:09.006626  108638 store.go:1342] Monitoring replicationcontrollers count at <storage-prefix>//controllers
I0919 11:30:09.006783  108638 reflector.go:153] Listing and watching *core.ReplicationController from storage/cacher.go:/controllers
I0919 11:30:09.006794  108638 rest.go:115] the default service ipfamily for this cluster is: IPv4
I0919 11:30:09.007578  108638 watch_cache.go:405] Replace watchCache (rev: 59519) 
I0919 11:30:09.007713  108638 storage_factory.go:285] storing bindings in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:09.008213  108638 storage_factory.go:285] storing componentstatuses in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:09.008781  108638 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:09.009237  108638 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:09.009742  108638 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:09.010306  108638 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:09.010596  108638 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:09.010707  108638 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:09.010847  108638 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:09.011174  108638 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:09.011550  108638 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:09.011727  108638 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:09.012276  108638 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:09.012448  108638 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:09.012804  108638 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:09.012945  108638 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:09.013383  108638 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:09.013505  108638 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:09.013585  108638 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:09.013683  108638 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:09.013792  108638 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:09.013873  108638 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:09.013984  108638 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:09.014488  108638 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:09.014741  108638 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:09.015302  108638 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:09.015796  108638 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:09.015976  108638 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:09.016133  108638 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:09.016708  108638 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:09.016897  108638 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:09.017422  108638 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:09.017953  108638 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:09.018427  108638 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:09.018939  108638 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:09.019123  108638 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:09.019203  108638 master.go:450] Skipping disabled API group "auditregistration.k8s.io".
I0919 11:30:09.019221  108638 master.go:461] Enabling API group "authentication.k8s.io".
I0919 11:30:09.019232  108638 master.go:461] Enabling API group "authorization.k8s.io".
I0919 11:30:09.019342  108638 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:09.019572  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:30:09.019599  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:30:09.020446  108638 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0919 11:30:09.020520  108638 reflector.go:153] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0919 11:30:09.020705  108638 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:09.020898  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:30:09.020926  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:30:09.021735  108638 watch_cache.go:405] Replace watchCache (rev: 59519) 
I0919 11:30:09.021771  108638 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0919 11:30:09.021789  108638 reflector.go:153] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0919 11:30:09.021980  108638 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:09.022169  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:30:09.022191  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:30:09.022661  108638 watch_cache.go:405] Replace watchCache (rev: 59519) 
I0919 11:30:09.022849  108638 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0919 11:30:09.022884  108638 master.go:461] Enabling API group "autoscaling".
I0919 11:30:09.023041  108638 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:09.023151  108638 reflector.go:153] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0919 11:30:09.023253  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:30:09.023326  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:30:09.023818  108638 watch_cache.go:405] Replace watchCache (rev: 59519) 
I0919 11:30:09.023907  108638 store.go:1342] Monitoring jobs.batch count at <storage-prefix>//jobs
I0919 11:30:09.023960  108638 reflector.go:153] Listing and watching *batch.Job from storage/cacher.go:/jobs
I0919 11:30:09.024078  108638 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:09.024276  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:30:09.024311  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:30:09.024792  108638 watch_cache.go:405] Replace watchCache (rev: 59519) 
I0919 11:30:09.025015  108638 store.go:1342] Monitoring cronjobs.batch count at <storage-prefix>//cronjobs
I0919 11:30:09.025054  108638 master.go:461] Enabling API group "batch".
I0919 11:30:09.025111  108638 reflector.go:153] Listing and watching *batch.CronJob from storage/cacher.go:/cronjobs
I0919 11:30:09.025216  108638 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:09.025418  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:30:09.025451  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:30:09.025886  108638 watch_cache.go:405] Replace watchCache (rev: 59519) 
I0919 11:30:09.025989  108638 store.go:1342] Monitoring certificatesigningrequests.certificates.k8s.io count at <storage-prefix>//certificatesigningrequests
I0919 11:30:09.026080  108638 master.go:461] Enabling API group "certificates.k8s.io".
I0919 11:30:09.026317  108638 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:09.026572  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:30:09.026673  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:30:09.026749  108638 reflector.go:153] Listing and watching *certificates.CertificateSigningRequest from storage/cacher.go:/certificatesigningrequests
I0919 11:30:09.027344  108638 watch_cache.go:405] Replace watchCache (rev: 59519) 
I0919 11:30:09.027627  108638 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0919 11:30:09.027685  108638 reflector.go:153] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0919 11:30:09.027792  108638 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:09.027982  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:30:09.028008  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:30:09.028229  108638 watch_cache.go:405] Replace watchCache (rev: 59519) 
I0919 11:30:09.028540  108638 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0919 11:30:09.028561  108638 master.go:461] Enabling API group "coordination.k8s.io".
I0919 11:30:09.028576  108638 master.go:450] Skipping disabled API group "discovery.k8s.io".
I0919 11:30:09.028595  108638 reflector.go:153] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0919 11:30:09.028742  108638 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:09.028924  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:30:09.028952  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:30:09.029282  108638 watch_cache.go:405] Replace watchCache (rev: 59519) 
I0919 11:30:09.029567  108638 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0919 11:30:09.029590  108638 master.go:461] Enabling API group "extensions".
I0919 11:30:09.029611  108638 reflector.go:153] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0919 11:30:09.029722  108638 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:09.029878  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:30:09.029899  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:30:09.030522  108638 watch_cache.go:405] Replace watchCache (rev: 59519) 
I0919 11:30:09.030710  108638 store.go:1342] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I0919 11:30:09.030769  108638 reflector.go:153] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I0919 11:30:09.030907  108638 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:09.031110  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:30:09.031171  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:30:09.031666  108638 watch_cache.go:405] Replace watchCache (rev: 59519) 
I0919 11:30:09.031854  108638 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0919 11:30:09.031910  108638 master.go:461] Enabling API group "networking.k8s.io".
I0919 11:30:09.031943  108638 reflector.go:153] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0919 11:30:09.031965  108638 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:09.032230  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:30:09.032248  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:30:09.032620  108638 watch_cache.go:405] Replace watchCache (rev: 59519) 
I0919 11:30:09.032683  108638 store.go:1342] Monitoring runtimeclasses.node.k8s.io count at <storage-prefix>//runtimeclasses
I0919 11:30:09.032700  108638 master.go:461] Enabling API group "node.k8s.io".
I0919 11:30:09.032786  108638 reflector.go:153] Listing and watching *node.RuntimeClass from storage/cacher.go:/runtimeclasses
I0919 11:30:09.032817  108638 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:09.032969  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:30:09.032991  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:30:09.033395  108638 store.go:1342] Monitoring poddisruptionbudgets.policy count at <storage-prefix>//poddisruptionbudgets
I0919 11:30:09.033433  108638 reflector.go:153] Listing and watching *policy.PodDisruptionBudget from storage/cacher.go:/poddisruptionbudgets
I0919 11:30:09.033502  108638 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:09.033517  108638 watch_cache.go:405] Replace watchCache (rev: 59519) 
I0919 11:30:09.033709  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:30:09.033724  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:30:09.034025  108638 watch_cache.go:405] Replace watchCache (rev: 59519) 
I0919 11:30:09.034186  108638 store.go:1342] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicy
I0919 11:30:09.034201  108638 master.go:461] Enabling API group "policy".
I0919 11:30:09.034231  108638 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:09.034245  108638 reflector.go:153] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicy
I0919 11:30:09.034365  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:30:09.034378  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:30:09.034867  108638 watch_cache.go:405] Replace watchCache (rev: 59519) 
I0919 11:30:09.035072  108638 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0919 11:30:09.035100  108638 reflector.go:153] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0919 11:30:09.035176  108638 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:09.035335  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:30:09.035353  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:30:09.035859  108638 watch_cache.go:405] Replace watchCache (rev: 59519) 
I0919 11:30:09.036037  108638 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0919 11:30:09.036105  108638 reflector.go:153] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0919 11:30:09.036092  108638 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:09.036349  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:30:09.036378  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:30:09.036775  108638 watch_cache.go:405] Replace watchCache (rev: 59519) 
I0919 11:30:09.037196  108638 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0919 11:30:09.037255  108638 reflector.go:153] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0919 11:30:09.037328  108638 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:09.037518  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:30:09.037541  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:30:09.038682  108638 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0919 11:30:09.038704  108638 reflector.go:153] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0919 11:30:09.038854  108638 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:09.039425  108638 watch_cache.go:405] Replace watchCache (rev: 59519) 
I0919 11:30:09.040242  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:30:09.040343  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:30:09.041075  108638 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0919 11:30:09.041101  108638 reflector.go:153] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0919 11:30:09.041352  108638 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:09.041850  108638 watch_cache.go:405] Replace watchCache (rev: 59519) 
I0919 11:30:09.041921  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:30:09.042026  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:30:09.042499  108638 watch_cache.go:405] Replace watchCache (rev: 59519) 
I0919 11:30:09.042740  108638 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0919 11:30:09.042785  108638 reflector.go:153] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0919 11:30:09.042780  108638 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:09.042997  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:30:09.043022  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:30:09.043456  108638 watch_cache.go:405] Replace watchCache (rev: 59519) 
I0919 11:30:09.043660  108638 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0919 11:30:09.043748  108638 reflector.go:153] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0919 11:30:09.043874  108638 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:09.044129  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:30:09.044223  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:30:09.044492  108638 watch_cache.go:405] Replace watchCache (rev: 59519) 
I0919 11:30:09.044931  108638 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0919 11:30:09.044961  108638 master.go:461] Enabling API group "rbac.authorization.k8s.io".
I0919 11:30:09.045048  108638 reflector.go:153] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0919 11:30:09.045754  108638 watch_cache.go:405] Replace watchCache (rev: 59519) 
I0919 11:30:09.046238  108638 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:09.046445  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:30:09.046473  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:30:09.047068  108638 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0919 11:30:09.047123  108638 reflector.go:153] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0919 11:30:09.047171  108638 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:09.047312  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:30:09.047333  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:30:09.047732  108638 watch_cache.go:405] Replace watchCache (rev: 59519) 
I0919 11:30:09.047876  108638 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0919 11:30:09.047893  108638 master.go:461] Enabling API group "scheduling.k8s.io".
I0919 11:30:09.048105  108638 master.go:450] Skipping disabled API group "settings.k8s.io".
I0919 11:30:09.048122  108638 reflector.go:153] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0919 11:30:09.048260  108638 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:09.048402  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:30:09.048424  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:30:09.048835  108638 watch_cache.go:405] Replace watchCache (rev: 59519) 
I0919 11:30:09.048949  108638 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0919 11:30:09.049023  108638 reflector.go:153] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0919 11:30:09.049091  108638 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:09.049276  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:30:09.049291  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:30:09.049698  108638 watch_cache.go:405] Replace watchCache (rev: 59519) 
I0919 11:30:09.049928  108638 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0919 11:30:09.049959  108638 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:09.050036  108638 reflector.go:153] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0919 11:30:09.050112  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:30:09.050144  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:30:09.050632  108638 watch_cache.go:405] Replace watchCache (rev: 59519) 
I0919 11:30:09.051060  108638 store.go:1342] Monitoring csinodes.storage.k8s.io count at <storage-prefix>//csinodes
I0919 11:30:09.051099  108638 reflector.go:153] Listing and watching *storage.CSINode from storage/cacher.go:/csinodes
I0919 11:30:09.051181  108638 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:09.051709  108638 watch_cache.go:405] Replace watchCache (rev: 59519) 
I0919 11:30:09.051932  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:30:09.052223  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:30:09.052843  108638 store.go:1342] Monitoring csidrivers.storage.k8s.io count at <storage-prefix>//csidrivers
I0919 11:30:09.052923  108638 reflector.go:153] Listing and watching *storage.CSIDriver from storage/cacher.go:/csidrivers
I0919 11:30:09.052969  108638 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:09.053116  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:30:09.053153  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:30:09.053732  108638 watch_cache.go:405] Replace watchCache (rev: 59519) 
I0919 11:30:09.053753  108638 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0919 11:30:09.053771  108638 reflector.go:153] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0919 11:30:09.054102  108638 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:09.054262  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:30:09.054279  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:30:09.054871  108638 watch_cache.go:405] Replace watchCache (rev: 59519) 
I0919 11:30:09.055178  108638 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0919 11:30:09.055192  108638 master.go:461] Enabling API group "storage.k8s.io".
I0919 11:30:09.055317  108638 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:09.055347  108638 reflector.go:153] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0919 11:30:09.055486  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:30:09.055499  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:30:09.055885  108638 watch_cache.go:405] Replace watchCache (rev: 59519) 
I0919 11:30:09.056185  108638 store.go:1342] Monitoring deployments.apps count at <storage-prefix>//deployments
I0919 11:30:09.056213  108638 reflector.go:153] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0919 11:30:09.056479  108638 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:09.056791  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:30:09.056872  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:30:09.057084  108638 watch_cache.go:405] Replace watchCache (rev: 59519) 
I0919 11:30:09.057495  108638 store.go:1342] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0919 11:30:09.057536  108638 reflector.go:153] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0919 11:30:09.057686  108638 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:09.057888  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:30:09.057913  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:30:09.058582  108638 watch_cache.go:405] Replace watchCache (rev: 59519) 
I0919 11:30:09.058746  108638 store.go:1342] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0919 11:30:09.058842  108638 reflector.go:153] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0919 11:30:09.058877  108638 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:09.059094  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:30:09.059121  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:30:09.059609  108638 watch_cache.go:405] Replace watchCache (rev: 59519) 
I0919 11:30:09.059735  108638 store.go:1342] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0919 11:30:09.059784  108638 reflector.go:153] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0919 11:30:09.059876  108638 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:09.060373  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:30:09.060407  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:30:09.060551  108638 watch_cache.go:405] Replace watchCache (rev: 59519) 
I0919 11:30:09.061014  108638 store.go:1342] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0919 11:30:09.061036  108638 master.go:461] Enabling API group "apps".
I0919 11:30:09.061065  108638 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:09.061090  108638 reflector.go:153] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0919 11:30:09.061307  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:30:09.061331  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:30:09.062263  108638 watch_cache.go:405] Replace watchCache (rev: 59519) 
I0919 11:30:09.062288  108638 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0919 11:30:09.062315  108638 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:09.062342  108638 reflector.go:153] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0919 11:30:09.062473  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:30:09.062497  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:30:09.063026  108638 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0919 11:30:09.063067  108638 watch_cache.go:405] Replace watchCache (rev: 59519) 
I0919 11:30:09.063119  108638 reflector.go:153] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0919 11:30:09.063162  108638 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:09.063409  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:30:09.063531  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:30:09.063716  108638 watch_cache.go:405] Replace watchCache (rev: 59519) 
I0919 11:30:09.064337  108638 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0919 11:30:09.064364  108638 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:09.064427  108638 reflector.go:153] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0919 11:30:09.064526  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:30:09.064550  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:30:09.065239  108638 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0919 11:30:09.065262  108638 master.go:461] Enabling API group "admissionregistration.k8s.io".
I0919 11:30:09.065285  108638 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:09.065332  108638 reflector.go:153] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0919 11:30:09.065244  108638 watch_cache.go:405] Replace watchCache (rev: 59519) 
I0919 11:30:09.065560  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:30:09.065591  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:30:09.065982  108638 watch_cache.go:405] Replace watchCache (rev: 59519) 
I0919 11:30:09.066120  108638 store.go:1342] Monitoring events count at <storage-prefix>//events
I0919 11:30:09.066140  108638 master.go:461] Enabling API group "events.k8s.io".
I0919 11:30:09.066203  108638 reflector.go:153] Listing and watching *core.Event from storage/cacher.go:/events
I0919 11:30:09.066297  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:09.066315  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:09.066353  108638 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:09.066610  108638 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:09.066931  108638 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:09.067045  108638 watch_cache.go:405] Replace watchCache (rev: 59519) 
I0919 11:30:09.067073  108638 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:09.067186  108638 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:09.067290  108638 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:09.067483  108638 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:09.067598  108638 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:09.067728  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:09.067728  108638 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:09.067867  108638 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:09.068004  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:09.068631  108638 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:09.068894  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:09.068889  108638 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:09.069136  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:09.069488  108638 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:09.069726  108638 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:09.070423  108638 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:09.070718  108638 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:09.071448  108638 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:09.071715  108638 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:09.072265  108638 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:09.072377  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:09.072484  108638 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0919 11:30:09.072547  108638 genericapiserver.go:404] Skipping API batch/v2alpha1 because it has no resources.
I0919 11:30:09.073062  108638 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:09.073194  108638 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:09.073369  108638 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:09.073935  108638 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:09.074437  108638 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:09.075074  108638 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:09.075283  108638 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:09.075868  108638 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:09.076383  108638 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:09.076600  108638 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:09.077144  108638 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0919 11:30:09.077201  108638 genericapiserver.go:404] Skipping API node.k8s.io/v1alpha1 because it has no resources.
I0919 11:30:09.077801  108638 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:09.078017  108638 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:09.078467  108638 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:09.078964  108638 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:09.079365  108638 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:09.079850  108638 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:09.080350  108638 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:09.080819  108638 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:09.081216  108638 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:09.081807  108638 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:09.082267  108638 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0919 11:30:09.082324  108638 genericapiserver.go:404] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
I0919 11:30:09.082774  108638 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:09.083199  108638 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0919 11:30:09.083251  108638 genericapiserver.go:404] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
I0919 11:30:09.083674  108638 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:09.084121  108638 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:09.084352  108638 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:09.084842  108638 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:09.085263  108638 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:09.085628  108638 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:09.086112  108638 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0919 11:30:09.086167  108638 genericapiserver.go:404] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
I0919 11:30:09.086758  108638 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:09.087309  108638 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:09.087559  108638 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:09.088105  108638 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:09.088362  108638 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:09.088637  108638 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:09.089219  108638 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:09.089433  108638 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:09.089710  108638 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:09.090302  108638 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:09.090537  108638 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:09.090814  108638 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0919 11:30:09.090872  108638 genericapiserver.go:404] Skipping API apps/v1beta2 because it has no resources.
W0919 11:30:09.090880  108638 genericapiserver.go:404] Skipping API apps/v1beta1 because it has no resources.
I0919 11:30:09.091410  108638 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:09.091889  108638 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:09.092443  108638 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:09.092949  108638 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:09.093625  108638 storage_factory.go:285] storing events.events.k8s.io in events.k8s.io/v1beta1, reading as events.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fbbe6906-7e92-4457-97e8-a505a3086ac8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:30:09.096322  108638 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 11:30:09.096343  108638 healthz.go:177] healthz check poststarthook/bootstrap-controller failed: not finished
I0919 11:30:09.096350  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:30:09.096358  108638 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 11:30:09.096364  108638 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 11:30:09.096369  108638 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[-]poststarthook/bootstrap-controller failed: reason withheld
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 11:30:09.096391  108638 httplog.go:90] GET /healthz: (155.454µs) 0 [Go-http-client/1.1 127.0.0.1:48520]
I0919 11:30:09.097512  108638 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.276691ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48522]
I0919 11:30:09.099669  108638 httplog.go:90] GET /api/v1/services: (1.036659ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48522]
I0919 11:30:09.102520  108638 httplog.go:90] GET /api/v1/services: (716.96µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48522]
I0919 11:30:09.104330  108638 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 11:30:09.104410  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:30:09.104447  108638 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 11:30:09.104479  108638 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 11:30:09.104512  108638 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 11:30:09.104623  108638 httplog.go:90] GET /healthz: (366.174µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48522]
I0919 11:30:09.105422  108638 httplog.go:90] GET /api/v1/namespaces/kube-system: (947.247µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48520]
I0919 11:30:09.106000  108638 httplog.go:90] GET /api/v1/services: (891.463µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48522]
I0919 11:30:09.106056  108638 httplog.go:90] GET /api/v1/services: (929.5µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48526]
I0919 11:30:09.107042  108638 httplog.go:90] POST /api/v1/namespaces: (1.185949ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48520]
I0919 11:30:09.108039  108638 httplog.go:90] GET /api/v1/namespaces/kube-public: (673.692µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48522]
I0919 11:30:09.109341  108638 httplog.go:90] POST /api/v1/namespaces: (1.007136ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48522]
I0919 11:30:09.110268  108638 httplog.go:90] GET /api/v1/namespaces/kube-node-lease: (644.725µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48522]
I0919 11:30:09.111749  108638 httplog.go:90] POST /api/v1/namespaces: (1.019232ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48522]
I0919 11:30:09.197036  108638 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 11:30:09.197068  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:30:09.197078  108638 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 11:30:09.197084  108638 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 11:30:09.197091  108638 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 11:30:09.197118  108638 httplog.go:90] GET /healthz: (206.673µs) 0 [Go-http-client/1.1 127.0.0.1:48522]
I0919 11:30:09.205512  108638 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 11:30:09.205544  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:30:09.205559  108638 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 11:30:09.205567  108638 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 11:30:09.205574  108638 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 11:30:09.205610  108638 httplog.go:90] GET /healthz: (204.065µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48522]
I0919 11:30:09.297125  108638 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 11:30:09.297159  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:30:09.297171  108638 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 11:30:09.297180  108638 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 11:30:09.297203  108638 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 11:30:09.297255  108638 httplog.go:90] GET /healthz: (302.198µs) 0 [Go-http-client/1.1 127.0.0.1:48522]
I0919 11:30:09.481690  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:09.481702  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:09.482728  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:09.483676  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:09.483969  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:09.484176  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:09.549729  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:09.549891  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:09.550123  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:09.550583  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:09.551638  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:09.551743  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:09.691099  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:09.755343  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:09.983761  108638 client.go:361] parsed scheme: "endpoint"
I0919 11:30:09.983891  108638 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:30:09.998267  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:30:09.998445  108638 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 11:30:09.998482  108638 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 11:30:09.998529  108638 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 11:30:09.998676  108638 httplog.go:90] GET /healthz: (1.580494ms) 0 [Go-http-client/1.1 127.0.0.1:48522]
I0919 11:30:10.066460  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:10.066597  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:10.067869  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:10.068183  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:10.069033  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:10.069284  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:10.072592  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:10.097747  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.326914ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:10.097753  108638 httplog.go:90] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-node-critical: (1.282783ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48522]
I0919 11:30:10.098999  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (836.448µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:10.099194  108638 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.00991ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48534]
I0919 11:30:10.099695  108638 httplog.go:90] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (1.49882ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48522]
I0919 11:30:10.099869  108638 storage_scheduling.go:139] created PriorityClass system-node-critical with value 2000001000
I0919 11:30:10.100435  108638 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: (876.834µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:10.100747  108638 httplog.go:90] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-cluster-critical: (695.274µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48522]
I0919 11:30:10.101038  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (810.344µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48532]
I0919 11:30:10.102415  108638 httplog.go:90] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (1.332343ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48522]
I0919 11:30:10.102436  108638 httplog.go:90] POST /api/v1/namespaces/kube-system/configmaps: (1.513029ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:10.102579  108638 storage_scheduling.go:139] created PriorityClass system-cluster-critical with value 2000000000
I0919 11:30:10.102604  108638 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
I0919 11:30:10.102840  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (1.411852ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48532]
I0919 11:30:10.103929  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (738.151µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48522]
I0919 11:30:10.104888  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (698.198µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48522]
I0919 11:30:10.105872  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:30:10.105904  108638 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:30:10.105934  108638 httplog.go:90] GET /healthz: (655.798µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48522]
I0919 11:30:10.106316  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (982.688µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:10.107374  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (653.027µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:10.108563  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (792.235µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:10.109555  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-admin: (702.286µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:10.111382  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.383504ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:10.111561  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/cluster-admin
I0919 11:30:10.112368  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:discovery: (623.509µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:10.113894  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.21966ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:10.115241  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:discovery
I0919 11:30:10.116241  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:basic-user: (794.916µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:10.118061  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.415536ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:10.118358  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:basic-user
I0919 11:30:10.119305  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:public-info-viewer: (764.005µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:10.120954  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.161302ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:10.121164  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:public-info-viewer
I0919 11:30:10.122072  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (670.166µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:10.123581  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.160991ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:10.123792  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/admin
I0919 11:30:10.124605  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (622.004µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:10.126072  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.144768ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:10.126263  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/edit
I0919 11:30:10.127118  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (676.284µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:10.128623  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.17582ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:10.128832  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/view
I0919 11:30:10.129761  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (766.979µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:10.131319  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.141773ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:10.131533  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin
I0919 11:30:10.132454  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (662.035µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:10.134110  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.270087ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:10.134447  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit
I0919 11:30:10.135361  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (712.499µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:10.137474  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.701451ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:10.137718  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view
I0919 11:30:10.138827  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:heapster: (870.62µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:10.140369  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.215291ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:10.140537  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:heapster
I0919 11:30:10.141533  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node: (737.322µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:10.143598  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.588278ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:10.143912  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node
I0919 11:30:10.145103  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-problem-detector: (968.072µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:10.146579  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.179797ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:10.146895  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector
I0919 11:30:10.147986  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kubelet-api-admin: (885.385µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:10.149458  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.122887ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:10.149670  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kubelet-api-admin
I0919 11:30:10.150516  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-bootstrapper: (672.476µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:10.152047  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.165424ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:10.152190  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-bootstrapper
I0919 11:30:10.152983  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:auth-delegator: (645.269µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:10.154573  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.255765ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:10.154749  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:auth-delegator
I0919 11:30:10.155542  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-aggregator: (644.257µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:10.157374  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.365388ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:10.157565  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator
I0919 11:30:10.158419  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-controller-manager: (653.841µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:10.160005  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.214854ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:10.160237  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager
I0919 11:30:10.161097  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-dns: (670.869µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:10.162717  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.224624ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:10.162935  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-dns
I0919 11:30:10.163872  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:persistent-volume-provisioner: (766.187µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:10.165946  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.716247ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:10.166187  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner
I0919 11:30:10.167174  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-attacher: (755.517µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:10.168899  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.268277ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:10.169103  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:csi-external-attacher
I0919 11:30:10.170267  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:nodeclient: (854.179µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:10.171843  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.143073ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:10.172030  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:nodeclient
I0919 11:30:10.172992  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient: (782.754µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:10.174475  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.097266ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:10.174680  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
I0919 11:30:10.175585  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:volume-scheduler: (674.183µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:10.177154  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.181899ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:10.177345  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:volume-scheduler
I0919 11:30:10.178206  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-proxier: (700.804µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:10.179832  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.257777ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:10.180071  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-proxier
I0919 11:30:10.180977  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-scheduler: (658.596µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:10.182632  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.37253ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:10.182935  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler
I0919 11:30:10.183904  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-provisioner: (710.721µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:10.185680  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.360705ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:10.185901  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:csi-external-provisioner
I0919 11:30:10.186750  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:attachdetach-controller: (613.455µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:10.188275  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.105978ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:10.188453  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0919 11:30:10.189285  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:clusterrole-aggregation-controller: (667.62µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:10.190843  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.236693ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:10.191043  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0919 11:30:10.191985  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:cronjob-controller: (730.171µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:10.193637  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.248028ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:10.193889  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0919 11:30:10.194686  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:daemon-set-controller: (636.355µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:10.196152  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.063578ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:10.196391  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0919 11:30:10.197285  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:deployment-controller: (732.888µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:10.197534  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:30:10.197632  108638 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:30:10.197704  108638 httplog.go:90] GET /healthz: (835.982µs) 0 [Go-http-client/1.1 127.0.0.1:48522]
I0919 11:30:10.199034  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.100132ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:10.199241  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller
I0919 11:30:10.200045  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:disruption-controller: (636.648µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:10.201588  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.167057ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:10.201841  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller
I0919 11:30:10.202742  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:endpoint-controller: (703.896µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:10.204280  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.167342ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:10.204495  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0919 11:30:10.205404  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:expand-controller: (700.938µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:10.205922  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:30:10.205942  108638 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:30:10.205964  108638 httplog.go:90] GET /healthz: (649.673µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48522]
I0919 11:30:10.207139  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.334633ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:10.207367  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:expand-controller
I0919 11:30:10.208317  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:generic-garbage-collector: (725.508µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:10.209772  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.072248ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:10.210010  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0919 11:30:10.210913  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:horizontal-pod-autoscaler: (710.965µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:10.212596  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.244644ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:10.212913  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0919 11:30:10.213805  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:job-controller: (720.746µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:10.215700  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.477361ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:10.215991  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller
I0919 11:30:10.216968  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:namespace-controller: (764.434µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:10.218419  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.093152ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:10.218714  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:namespace-controller
I0919 11:30:10.219567  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:node-controller: (642.183µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:10.221232  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.236588ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:10.221460  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:node-controller
I0919 11:30:10.222323  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:persistent-volume-binder: (625.97µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:10.224116  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.302343ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:10.224391  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0919 11:30:10.225306  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pod-garbage-collector: (727.726µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:10.226770  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.139858ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:10.226964  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0919 11:30:10.227825  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replicaset-controller: (689.447µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:10.229361  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.217763ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:10.229628  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0919 11:30:10.230775  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replication-controller: (827.221µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:10.232403  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.270919ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:10.232620  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller
I0919 11:30:10.233522  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:resourcequota-controller: (657.237µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:10.234964  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.091638ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:10.235206  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0919 11:30:10.236172  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:route-controller: (716.292µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:10.237980  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.433463ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:10.238130  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:route-controller
I0919 11:30:10.238961  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-account-controller: (666.386µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:10.240606  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.333461ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:10.240926  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-account-controller
I0919 11:30:10.242028  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-controller: (809.369µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:10.243801  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.357264ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:10.243992  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-controller
I0919 11:30:10.245051  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:statefulset-controller: (867.52µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:10.246717  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.229243ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:10.246902  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0919 11:30:10.257686  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:ttl-controller: (1.176071ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:10.278909  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.206627ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:10.279171  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-controller
I0919 11:30:10.297618  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:30:10.297667  108638 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:30:10.297703  108638 httplog.go:90] GET /healthz: (778.537µs) 0 [Go-http-client/1.1 127.0.0.1:48522]
I0919 11:30:10.297942  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:certificate-controller: (1.307688ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:10.306308  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:30:10.306336  108638 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:30:10.306372  108638 httplog.go:90] GET /healthz: (968.59µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:10.318534  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.909422ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:10.318875  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller
I0919 11:30:10.337613  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pvc-protection-controller: (1.033267ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:10.358556  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.930289ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:10.358939  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0919 11:30:10.377720  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pv-protection-controller: (1.134181ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:10.397966  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:30:10.398150  108638 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:30:10.398390  108638 httplog.go:90] GET /healthz: (1.464331ms) 0 [Go-http-client/1.1 127.0.0.1:48522]
I0919 11:30:10.398453  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.899587ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:10.398947  108638 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0919 11:30:10.406411  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:30:10.406669  108638 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:30:10.406869  108638 httplog.go:90] GET /healthz: (1.438529ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:10.418088  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin: (1.472779ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:10.438695  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.948713ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:10.438922  108638 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin
I0919 11:30:10.458093  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (1.464253ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:10.478747  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.107552ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:10.479029  108638 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery
I0919 11:30:10.481965  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:10.481982  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:10.482873  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:10.483795  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:10.484145  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:10.484420  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:10.497586  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:basic-user: (1.013009ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:10.497630  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:30:10.497915  108638 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:30:10.498025  108638 httplog.go:90] GET /healthz: (1.083002ms) 0 [Go-http-client/1.1 127.0.0.1:48522]
I0919 11:30:10.506410  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:30:10.506442  108638 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:30:10.506522  108638 httplog.go:90] GET /healthz: (1.100952ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48522]
I0919 11:30:10.518398  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.798072ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48522]
I0919 11:30:10.518580  108638 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user
I0919 11:30:10.537766  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:public-info-viewer: (1.227632ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48522]
I0919 11:30:10.549903  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:10.550071  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:10.550200  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:10.550861  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:10.551900  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:10.551905  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:10.558668  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.996086ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48522]
I0919 11:30:10.559021  108638 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:public-info-viewer
I0919 11:30:10.577843  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node-proxier: (1.255858ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48522]
I0919 11:30:10.598073  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:30:10.598116  108638 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:30:10.598152  108638 httplog.go:90] GET /healthz: (1.13365ms) 0 [Go-http-client/1.1 127.0.0.1:48524]
I0919 11:30:10.598559  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.850013ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48522]
I0919 11:30:10.598803  108638 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier
I0919 11:30:10.606279  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:30:10.606425  108638 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:30:10.606588  108638 httplog.go:90] GET /healthz: (1.188119ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48522]
I0919 11:30:10.617617  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-controller-manager: (1.069334ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48522]
I0919 11:30:10.638536  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.915118ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48522]
I0919 11:30:10.638867  108638 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager
I0919 11:30:10.657883  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-dns: (1.315714ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48522]
I0919 11:30:10.678533  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.909628ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48522]
I0919 11:30:10.678985  108638 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns
I0919 11:30:10.691292  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:10.698152  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-scheduler: (1.423375ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48522]
I0919 11:30:10.698414  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:30:10.698446  108638 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:30:10.698513  108638 httplog.go:90] GET /healthz: (1.43792ms) 0 [Go-http-client/1.1 127.0.0.1:48524]
I0919 11:30:10.706476  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:30:10.706507  108638 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:30:10.706544  108638 httplog.go:90] GET /healthz: (1.175725ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:10.718370  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.769251ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:10.718705  108638 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler
I0919 11:30:10.737989  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:volume-scheduler: (1.387477ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:10.755514  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:10.758585  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.960813ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:10.758860  108638 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:volume-scheduler
I0919 11:30:10.777689  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node: (1.121644ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:10.797983  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:30:10.798019  108638 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:30:10.798071  108638 httplog.go:90] GET /healthz: (1.082974ms) 0 [Go-http-client/1.1 127.0.0.1:48522]
I0919 11:30:10.798725  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.987087ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:10.798917  108638 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node
I0919 11:30:10.806479  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:30:10.806620  108638 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:30:10.806768  108638 httplog.go:90] GET /healthz: (1.290192ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:10.817896  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:attachdetach-controller: (1.360701ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:10.838198  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.556468ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:10.838562  108638 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0919 11:30:10.857619  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:clusterrole-aggregation-controller: (1.03894ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:10.878519  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.941828ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:10.878800  108638 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0919 11:30:10.897904  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:cronjob-controller: (1.282009ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:10.897905  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:30:10.897977  108638 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:30:10.898026  108638 httplog.go:90] GET /healthz: (1.098882ms) 0 [Go-http-client/1.1 127.0.0.1:48522]
I0919 11:30:10.906425  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:30:10.906460  108638 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:30:10.906683  108638 httplog.go:90] GET /healthz: (1.119851ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:10.918308  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.737637ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:10.918623  108638 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0919 11:30:10.937952  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:daemon-set-controller: (1.369018ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:10.958852  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.156947ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:10.959148  108638 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0919 11:30:10.977876  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:deployment-controller: (1.220826ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:10.998760  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:30:10.998900  108638 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:30:10.999012  108638 httplog.go:90] GET /healthz: (2.079109ms) 0 [Go-http-client/1.1 127.0.0.1:48522]
I0919 11:30:10.998774  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.122266ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:10.999291  108638 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I0919 11:30:11.006281  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:30:11.006421  108638 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:30:11.006525  108638 httplog.go:90] GET /healthz: (1.110396ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:11.017867  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:disruption-controller: (1.248654ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:11.038609  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.005333ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:11.038877  108638 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I0919 11:30:11.057987  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:endpoint-controller: (1.308876ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:11.066688  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:11.066734  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:11.068059  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:11.068277  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:11.069183  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:11.069448  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:11.072822  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:11.078538  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.021759ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:11.078787  108638 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0919 11:30:11.097869  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:30:11.098023  108638 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:30:11.098164  108638 httplog.go:90] GET /healthz: (1.292912ms) 0 [Go-http-client/1.1 127.0.0.1:48522]
I0919 11:30:11.097982  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:expand-controller: (1.332216ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:11.106084  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:30:11.106181  108638 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:30:11.106223  108638 httplog.go:90] GET /healthz: (831.94µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:11.118418  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.887718ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:11.118683  108638 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
I0919 11:30:11.137705  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:generic-garbage-collector: (1.128542ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:11.158627  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.979721ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:11.158966  108638 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0919 11:30:11.177829  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:horizontal-pod-autoscaler: (1.27489ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:11.197954  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:30:11.197982  108638 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:30:11.198013  108638 httplog.go:90] GET /healthz: (1.065797ms) 0 [Go-http-client/1.1 127.0.0.1:48522]
I0919 11:30:11.198853  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.257894ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:11.199073  108638 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0919 11:30:11.206345  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:30:11.206378  108638 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:30:11.206437  108638 httplog.go:90] GET /healthz: (1.059044ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:11.218015  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:job-controller: (1.395702ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:11.238608  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.025558ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:11.238973  108638 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I0919 11:30:11.258042  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:namespace-controller: (1.33133ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:11.278569  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.990189ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:11.278818  108638 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I0919 11:30:11.297832  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:30:11.297902  108638 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:30:11.297939  108638 httplog.go:90] GET /healthz: (989.118µs) 0 [Go-http-client/1.1 127.0.0.1:48522]
I0919 11:30:11.298213  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:node-controller: (1.597554ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:11.306241  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:30:11.306368  108638 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:30:11.306526  108638 httplog.go:90] GET /healthz: (1.119535ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:11.318370  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.675748ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:11.318595  108638 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I0919 11:30:11.337964  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:persistent-volume-binder: (1.324954ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:11.359012  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.259748ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:11.359362  108638 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0919 11:30:11.377879  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pod-garbage-collector: (1.319824ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:11.397955  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:30:11.397997  108638 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:30:11.398041  108638 httplog.go:90] GET /healthz: (1.129919ms) 0 [Go-http-client/1.1 127.0.0.1:48522]
I0919 11:30:11.398633  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.949154ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:11.398911  108638 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0919 11:30:11.406206  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:30:11.406238  108638 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:30:11.406271  108638 httplog.go:90] GET /healthz: (899.128µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:11.417622  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replicaset-controller: (1.049963ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:11.438439  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.857777ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:11.438657  108638 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0919 11:30:11.457866  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replication-controller: (1.249704ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:11.478443  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.85517ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:11.478706  108638 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I0919 11:30:11.482201  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:11.482202  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:11.483021  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:11.484127  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:11.484327  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:11.484602  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:11.497957  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:30:11.498101  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:resourcequota-controller: (1.493817ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:11.498110  108638 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:30:11.498292  108638 httplog.go:90] GET /healthz: (1.292956ms) 0 [Go-http-client/1.1 127.0.0.1:48522]
I0919 11:30:11.506526  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:30:11.506554  108638 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:30:11.506587  108638 httplog.go:90] GET /healthz: (1.109026ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48522]
I0919 11:30:11.518392  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.813946ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48522]
I0919 11:30:11.518617  108638 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0919 11:30:11.537815  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:route-controller: (1.229648ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48522]
I0919 11:30:11.550099  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:11.550220  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:11.550464  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:11.551005  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:11.552173  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:11.552335  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:11.558751  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.150432ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48522]
I0919 11:30:11.559023  108638 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I0919 11:30:11.578101  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-account-controller: (1.3436ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48522]
I0919 11:30:11.598036  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:30:11.598456  108638 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:30:11.598700  108638 httplog.go:90] GET /healthz: (1.779303ms) 0 [Go-http-client/1.1 127.0.0.1:48524]
I0919 11:30:11.598516  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.942096ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48522]
I0919 11:30:11.599235  108638 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I0919 11:30:11.606798  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:30:11.606834  108638 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:30:11.606886  108638 httplog.go:90] GET /healthz: (1.426062ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48522]
I0919 11:30:11.618120  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-controller: (1.297942ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48522]
I0919 11:30:11.638434  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.796671ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48522]
I0919 11:30:11.638674  108638 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I0919 11:30:11.658005  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:statefulset-controller: (1.414159ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48522]
I0919 11:30:11.678439  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.536776ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48522]
I0919 11:30:11.678782  108638 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0919 11:30:11.691514  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:11.698082  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:30:11.698111  108638 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:30:11.698152  108638 httplog.go:90] GET /healthz: (1.236799ms) 0 [Go-http-client/1.1 127.0.0.1:48524]
I0919 11:30:11.698209  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:ttl-controller: (1.603054ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48522]
I0919 11:30:11.706270  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:30:11.706396  108638 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:30:11.706513  108638 httplog.go:90] GET /healthz: (1.074903ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48522]
I0919 11:30:11.718625  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.095485ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48522]
I0919 11:30:11.718857  108638 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I0919 11:30:11.737979  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:certificate-controller: (1.422024ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48522]
I0919 11:30:11.755722  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:11.758494  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.939194ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48522]
I0919 11:30:11.758774  108638 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I0919 11:30:11.777908  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pvc-protection-controller: (1.285204ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48522]
I0919 11:30:11.798019  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:30:11.798046  108638 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:30:11.798080  108638 httplog.go:90] GET /healthz: (1.096499ms) 0 [Go-http-client/1.1 127.0.0.1:48524]
I0919 11:30:11.799452  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.840481ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48522]
I0919 11:30:11.799666  108638 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0919 11:30:11.806370  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:30:11.806401  108638 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:30:11.806465  108638 httplog.go:90] GET /healthz: (1.046124ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48522]
I0919 11:30:11.817915  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pv-protection-controller: (1.239234ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48522]
I0919 11:30:11.838824  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.081267ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48522]
I0919 11:30:11.839137  108638 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0919 11:30:11.857909  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/extension-apiserver-authentication-reader: (1.266635ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48522]
I0919 11:30:11.859698  108638 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.226367ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48522]
I0919 11:30:11.878538  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.930925ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48522]
I0919 11:30:11.878805  108638 storage_rbac.go:278] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
I0919 11:30:11.897978  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:bootstrap-signer: (1.375035ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48522]
I0919 11:30:11.898015  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:30:11.898170  108638 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:30:11.898234  108638 httplog.go:90] GET /healthz: (1.263365ms) 0 [Go-http-client/1.1 127.0.0.1:48524]
I0919 11:30:11.899700  108638 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.194818ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48522]
I0919 11:30:11.906424  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:30:11.906447  108638 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:30:11.906483  108638 httplog.go:90] GET /healthz: (1.006771ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48522]
I0919 11:30:11.918303  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.830571ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48522]
I0919 11:30:11.918683  108638 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0919 11:30:11.937904  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:cloud-provider: (1.327287ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48522]
I0919 11:30:11.939617  108638 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.189969ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48522]
I0919 11:30:11.958691  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.014607ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48522]
I0919 11:30:11.958909  108638 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0919 11:30:11.978030  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:token-cleaner: (1.417944ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48522]
I0919 11:30:11.979565  108638 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.189595ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48522]
I0919 11:30:11.998796  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:30:11.998971  108638 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:30:11.999110  108638 httplog.go:90] GET /healthz: (1.997898ms) 0 [Go-http-client/1.1 127.0.0.1:48524]
I0919 11:30:11.999207  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.551181ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48522]
I0919 11:30:11.999387  108638 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0919 11:30:12.006420  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:30:12.006449  108638 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:30:12.006543  108638 httplog.go:90] GET /healthz: (1.123793ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48522]
I0919 11:30:12.017862  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-controller-manager: (1.242826ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48522]
I0919 11:30:12.019890  108638 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.414442ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48522]
I0919 11:30:12.038704  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.091525ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48522]
I0919 11:30:12.039127  108638 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0919 11:30:12.058324  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-scheduler: (1.676248ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48522]
I0919 11:30:12.060252  108638 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.399113ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48522]
I0919 11:30:12.066871  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:12.066889  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:12.068282  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:12.068412  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:12.069316  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:12.069677  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:12.072982  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:12.078716  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.03117ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48522]
I0919 11:30:12.078956  108638 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0919 11:30:12.097914  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles/system:controller:bootstrap-signer: (1.306413ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48522]
I0919 11:30:12.098049  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:30:12.098077  108638 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:30:12.098101  108638 httplog.go:90] GET /healthz: (1.23501ms) 0 [Go-http-client/1.1 127.0.0.1:48524]
I0919 11:30:12.099401  108638 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.08399ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48522]
I0919 11:30:12.108861  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:30:12.109002  108638 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:30:12.109153  108638 httplog.go:90] GET /healthz: (3.692719ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48522]
I0919 11:30:12.118899  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles: (2.263467ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48522]
I0919 11:30:12.119121  108638 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0919 11:30:12.138944  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::extension-apiserver-authentication-reader: (1.47717ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48522]
I0919 11:30:12.141022  108638 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.454177ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48522]
I0919 11:30:12.158888  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.112334ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48522]
I0919 11:30:12.159114  108638 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::extension-apiserver-authentication-reader in kube-system
I0919 11:30:12.179591  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-controller-manager: (3.016659ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48522]
I0919 11:30:12.181259  108638 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.148907ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48522]
I0919 11:30:12.198019  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:30:12.198273  108638 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:30:12.198494  108638 httplog.go:90] GET /healthz: (1.58623ms) 0 [Go-http-client/1.1 127.0.0.1:48524]
I0919 11:30:12.198560  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.963773ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48522]
I0919 11:30:12.198830  108638 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0919 11:30:12.206463  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:30:12.206498  108638 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:30:12.206566  108638 httplog.go:90] GET /healthz: (1.162914ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48522]
I0919 11:30:12.218057  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-scheduler: (1.427127ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48522]
I0919 11:30:12.219596  108638 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.053514ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48522]
I0919 11:30:12.238880  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.251967ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48522]
I0919 11:30:12.239115  108638 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0919 11:30:12.257941  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:bootstrap-signer: (1.3341ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48522]
I0919 11:30:12.259732  108638 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.257309ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48522]
I0919 11:30:12.275480  108638 httplog.go:90] GET /api/v1/namespaces/default: (1.562374ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33506]
I0919 11:30:12.277703  108638 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.629574ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33506]
I0919 11:30:12.278549  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.024286ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48522]
I0919 11:30:12.278771  108638 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0919 11:30:12.279481  108638 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.133103ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33506]
I0919 11:30:12.297915  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:30:12.297945  108638 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:30:12.297983  108638 httplog.go:90] GET /healthz: (1.106961ms) 0 [Go-http-client/1.1 127.0.0.1:48524]
I0919 11:30:12.298091  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:cloud-provider: (1.450652ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48522]
I0919 11:30:12.300063  108638 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.451715ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48522]
I0919 11:30:12.306707  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:30:12.306841  108638 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:30:12.306978  108638 httplog.go:90] GET /healthz: (1.587399ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48522]
I0919 11:30:12.318442  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.86703ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48522]
I0919 11:30:12.318765  108638 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0919 11:30:12.337935  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:token-cleaner: (1.250747ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48522]
I0919 11:30:12.339667  108638 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.251119ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48522]
I0919 11:30:12.359088  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.414401ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48522]
I0919 11:30:12.359426  108638 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0919 11:30:12.377962  108638 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings/system:controller:bootstrap-signer: (1.311649ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48522]
I0919 11:30:12.380123  108638 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.373625ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48522]
I0919 11:30:12.398239  108638 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:30:12.398377  108638 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:30:12.398477  108638 httplog.go:90] GET /healthz: (1.532787ms) 0 [Go-http-client/1.1 127.0.0.1:48524]
I0919 11:30:12.398717  108638 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings: (2.13889ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48522]
I0919 11:30:12.398982  108638 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0919 11:30:12.406386  108638 httplog.go:90] GET /healthz: (934.679µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48522]
I0919 11:30:12.407781  108638 httplog.go:90] GET /api/v1/namespaces/default: (1.04385ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48522]
I0919 11:30:12.409786  108638 httplog.go:90] POST /api/v1/namespaces: (1.533743ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48522]
I0919 11:30:12.410980  108638 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (883.824µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48522]
I0919 11:30:12.414528  108638 httplog.go:90] POST /api/v1/namespaces/default/services: (3.142428ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48522]
I0919 11:30:12.415660  108638 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (751.631µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48522]
I0919 11:30:12.417172  108638 httplog.go:90] POST /api/v1/namespaces/default/endpoints: (1.15761ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48522]
I0919 11:30:12.482396  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:12.482406  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:12.483169  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:12.484294  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:12.484530  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:12.484781  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:12.498025  108638 httplog.go:90] GET /healthz: (956.837µs) 200 [Go-http-client/1.1 127.0.0.1:48522]
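The log above repeatedly dumps the verbose `/healthz` response until the `poststarthook/rbac/bootstrap-roles` check finally passes and the endpoint returns 200. That verbose format is line-oriented: each check is reported as `[+]name ok` or `[-]name failed: …`. As a rough sketch (a hypothetical helper, not part of the Kubernetes codebase), it can be parsed like this:

```python
def parse_healthz(output: str) -> dict:
    """Map each health check name to True (ok) or False (failed),
    based on the verbose /healthz format seen in this log."""
    checks = {}
    for line in output.splitlines():
        line = line.strip()
        if line.startswith("[+]"):
            # e.g. "[+]ping ok" -> checks["ping"] = True
            checks[line[3:].split()[0]] = True
        elif line.startswith("[-]"):
            # e.g. "[-]poststarthook/rbac/bootstrap-roles failed: reason withheld"
            checks[line[3:].split()[0]] = False
    return checks

# Sample taken verbatim from the log above.
sample = """[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok"""

result = parse_healthz(sample)
```

With the sample above, `result` marks every check `True` except `poststarthook/rbac/bootstrap-roles`, which is exactly why the overall `healthz check failed` lines repeat until the RBAC bootstrap roles finish being created.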
W0919 11:30:12.499611  108638 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 11:30:12.499694  108638 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 11:30:12.499749  108638 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 11:30:12.499768  108638 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 11:30:12.499804  108638 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 11:30:12.499821  108638 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 11:30:12.499834  108638 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 11:30:12.499847  108638 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 11:30:12.499860  108638 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 11:30:12.499876  108638 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 11:30:12.499886  108638 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 11:30:12.499952  108638 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0919 11:30:12.499983  108638 factory.go:294] Creating scheduler from algorithm provider 'DefaultProvider'
I0919 11:30:12.499994  108638 factory.go:382] Creating scheduler with fit predicates 'map[CheckNodeUnschedulable:{} CheckVolumeBinding:{} GeneralPredicates:{} MatchInterPodAffinity:{} MaxAzureDiskVolumeCount:{} MaxCSIVolumeCountPred:{} MaxEBSVolumeCount:{} MaxGCEPDVolumeCount:{} NoDiskConflict:{} NoVolumeZoneConflict:{} PodToleratesNodeTaints:{}]' and priority functions 'map[BalancedResourceAllocation:{} ImageLocalityPriority:{} InterPodAffinityPriority:{} LeastRequestedPriority:{} NodeAffinityPriority:{} NodePreferAvoidPodsPriority:{} SelectorSpreadPriority:{} TaintTolerationPriority:{}]'
I0919 11:30:12.500230  108638 shared_informer.go:197] Waiting for caches to sync for scheduler
I0919 11:30:12.500564  108638 reflector.go:118] Starting reflector *v1.Pod (12h0m0s) from k8s.io/kubernetes/test/integration/scheduler/util.go:231
I0919 11:30:12.500589  108638 reflector.go:153] Listing and watching *v1.Pod from k8s.io/kubernetes/test/integration/scheduler/util.go:231
I0919 11:30:12.501534  108638 httplog.go:90] GET /api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0: (611.597µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48522]
I0919 11:30:12.502508  108638 get.go:251] Starting watch for /api/v1/pods, rv=59519 labels= fields=status.phase!=Failed,status.phase!=Succeeded timeout=5m26s
I0919 11:30:12.550269  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:12.550307  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:12.550696  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:12.551147  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:12.552320  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:12.552565  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:12.600390  108638 shared_informer.go:227] caches populated
I0919 11:30:12.600427  108638 shared_informer.go:204] Caches are synced for scheduler 
I0919 11:30:12.600763  108638 reflector.go:118] Starting reflector *v1beta1.CSINode (1s) from k8s.io/client-go/informers/factory.go:134
I0919 11:30:12.600789  108638 reflector.go:153] Listing and watching *v1beta1.CSINode from k8s.io/client-go/informers/factory.go:134
I0919 11:30:12.600938  108638 reflector.go:118] Starting reflector *v1.Node (1s) from k8s.io/client-go/informers/factory.go:134
I0919 11:30:12.601092  108638 reflector.go:153] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:134
I0919 11:30:12.600938  108638 reflector.go:118] Starting reflector *v1beta1.PodDisruptionBudget (1s) from k8s.io/client-go/informers/factory.go:134
I0919 11:30:12.601298  108638 reflector.go:153] Listing and watching *v1beta1.PodDisruptionBudget from k8s.io/client-go/informers/factory.go:134
I0919 11:30:12.601102  108638 reflector.go:118] Starting reflector *v1.StatefulSet (1s) from k8s.io/client-go/informers/factory.go:134
I0919 11:30:12.601362  108638 reflector.go:153] Listing and watching *v1.StatefulSet from k8s.io/client-go/informers/factory.go:134
I0919 11:30:12.601315  108638 reflector.go:118] Starting reflector *v1.PersistentVolume (1s) from k8s.io/client-go/informers/factory.go:134
I0919 11:30:12.601387  108638 reflector.go:153] Listing and watching *v1.PersistentVolume from k8s.io/client-go/informers/factory.go:134
I0919 11:30:12.600977  108638 reflector.go:118] Starting reflector *v1.PersistentVolumeClaim (1s) from k8s.io/client-go/informers/factory.go:134
I0919 11:30:12.601503  108638 reflector.go:153] Listing and watching *v1.PersistentVolumeClaim from k8s.io/client-go/informers/factory.go:134
I0919 11:30:12.601024  108638 reflector.go:118] Starting reflector *v1.ReplicaSet (1s) from k8s.io/client-go/informers/factory.go:134
I0919 11:30:12.601616  108638 reflector.go:153] Listing and watching *v1.ReplicaSet from k8s.io/client-go/informers/factory.go:134
I0919 11:30:12.601207  108638 reflector.go:118] Starting reflector *v1.Service (1s) from k8s.io/client-go/informers/factory.go:134
I0919 11:30:12.601659  108638 reflector.go:153] Listing and watching *v1.Service from k8s.io/client-go/informers/factory.go:134
I0919 11:30:12.601236  108638 reflector.go:118] Starting reflector *v1.ReplicationController (1s) from k8s.io/client-go/informers/factory.go:134
I0919 11:30:12.601722  108638 reflector.go:153] Listing and watching *v1.ReplicationController from k8s.io/client-go/informers/factory.go:134
I0919 11:30:12.601245  108638 reflector.go:118] Starting reflector *v1.StorageClass (1s) from k8s.io/client-go/informers/factory.go:134
I0919 11:30:12.601879  108638 reflector.go:153] Listing and watching *v1.StorageClass from k8s.io/client-go/informers/factory.go:134
I0919 11:30:12.602607  108638 httplog.go:90] GET /apis/apps/v1/statefulsets?limit=500&resourceVersion=0: (481.179µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48536]
I0919 11:30:12.602663  108638 httplog.go:90] GET /api/v1/replicationcontrollers?limit=500&resourceVersion=0: (587.218µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48562]
I0919 11:30:12.602710  108638 httplog.go:90] GET /api/v1/persistentvolumes?limit=500&resourceVersion=0: (417.311µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48538]
I0919 11:30:12.602741  108638 httplog.go:90] GET /apis/apps/v1/replicasets?limit=500&resourceVersion=0: (346.285µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48558]
I0919 11:30:12.602883  108638 httplog.go:90] GET /api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: (390.567µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48556]
I0919 11:30:12.603206  108638 httplog.go:90] GET /api/v1/services?limit=500&resourceVersion=0: (439.061µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48560]
I0919 11:30:12.603321  108638 get.go:251] Starting watch for /api/v1/persistentvolumes, rv=59519 labels= fields= timeout=6m35s
I0919 11:30:12.603453  108638 get.go:251] Starting watch for /api/v1/persistentvolumeclaims, rv=59519 labels= fields= timeout=9m33s
I0919 11:30:12.603504  108638 httplog.go:90] GET /api/v1/nodes?limit=500&resourceVersion=0: (1.445444ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48542]
I0919 11:30:12.603586  108638 get.go:251] Starting watch for /api/v1/replicationcontrollers, rv=59519 labels= fields= timeout=7m46s
I0919 11:30:12.603637  108638 get.go:251] Starting watch for /apis/apps/v1/replicasets, rv=59519 labels= fields= timeout=6m4s
I0919 11:30:12.603830  108638 get.go:251] Starting watch for /apis/apps/v1/statefulsets, rv=59519 labels= fields= timeout=5m18s
I0919 11:30:12.603870  108638 get.go:251] Starting watch for /api/v1/services, rv=59633 labels= fields= timeout=7m18s
I0919 11:30:12.604068  108638 get.go:251] Starting watch for /api/v1/nodes, rv=59519 labels= fields= timeout=5m20s
I0919 11:30:12.604246  108638 httplog.go:90] GET /apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: (2.134247ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48540]
I0919 11:30:12.604258  108638 httplog.go:90] GET /apis/storage.k8s.io/v1beta1/csinodes?limit=500&resourceVersion=0: (294.086µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48524]
I0919 11:30:12.604246  108638 httplog.go:90] GET /apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: (2.040331ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48564]
I0919 11:30:12.604853  108638 get.go:251] Starting watch for /apis/storage.k8s.io/v1/storageclasses, rv=59519 labels= fields= timeout=6m59s
I0919 11:30:12.604969  108638 get.go:251] Starting watch for /apis/policy/v1beta1/poddisruptionbudgets, rv=59519 labels= fields= timeout=8m59s
I0919 11:30:12.605259  108638 get.go:251] Starting watch for /apis/storage.k8s.io/v1beta1/csinodes, rv=59519 labels= fields= timeout=5m52s
I0919 11:30:12.691724  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:12.700842  108638 shared_informer.go:227] caches populated
I0919 11:30:12.700983  108638 shared_informer.go:227] caches populated
I0919 11:30:12.701128  108638 shared_informer.go:227] caches populated
I0919 11:30:12.701168  108638 shared_informer.go:227] caches populated
I0919 11:30:12.701193  108638 shared_informer.go:227] caches populated
I0919 11:30:12.701214  108638 shared_informer.go:227] caches populated
I0919 11:30:12.701247  108638 shared_informer.go:227] caches populated
I0919 11:30:12.701283  108638 shared_informer.go:227] caches populated
I0919 11:30:12.701313  108638 shared_informer.go:227] caches populated
I0919 11:30:12.701338  108638 shared_informer.go:227] caches populated
I0919 11:30:12.701377  108638 shared_informer.go:227] caches populated
I0919 11:30:12.704418  108638 httplog.go:90] POST /api/v1/namespaces: (2.231376ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48576]
I0919 11:30:12.704853  108638 node_lifecycle_controller.go:327] Sending events to api server.
I0919 11:30:12.704911  108638 node_lifecycle_controller.go:359] Controller is using taint based evictions.
W0919 11:30:12.704927  108638 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0919 11:30:12.704998  108638 taint_manager.go:162] Sending events to api server.
I0919 11:30:12.705084  108638 node_lifecycle_controller.go:453] Controller will reconcile labels.
I0919 11:30:12.705113  108638 node_lifecycle_controller.go:465] Controller will taint node by condition.
W0919 11:30:12.705172  108638 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 11:30:12.705202  108638 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0919 11:30:12.705226  108638 node_lifecycle_controller.go:488] Starting node controller
I0919 11:30:12.705362  108638 shared_informer.go:197] Waiting for caches to sync for taint
I0919 11:30:12.705394  108638 reflector.go:118] Starting reflector *v1.Namespace (1s) from k8s.io/client-go/informers/factory.go:134
I0919 11:30:12.705467  108638 reflector.go:153] Listing and watching *v1.Namespace from k8s.io/client-go/informers/factory.go:134
I0919 11:30:12.706230  108638 httplog.go:90] GET /api/v1/namespaces?limit=500&resourceVersion=0: (488.25µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48576]
I0919 11:30:12.707074  108638 get.go:251] Starting watch for /api/v1/namespaces, rv=59635 labels= fields= timeout=8m54s
I0919 11:30:12.755872  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:12.805392  108638 shared_informer.go:227] caches populated
I0919 11:30:12.805447  108638 shared_informer.go:227] caches populated
I0919 11:30:12.805486  108638 shared_informer.go:227] caches populated
I0919 11:30:12.805493  108638 shared_informer.go:227] caches populated
I0919 11:30:12.805498  108638 shared_informer.go:227] caches populated
I0919 11:30:12.805503  108638 shared_informer.go:227] caches populated
I0919 11:30:12.805770  108638 reflector.go:118] Starting reflector *v1beta1.Lease (1s) from k8s.io/client-go/informers/factory.go:134
I0919 11:30:12.805794  108638 reflector.go:153] Listing and watching *v1beta1.Lease from k8s.io/client-go/informers/factory.go:134
I0919 11:30:12.805802  108638 reflector.go:118] Starting reflector *v1.DaemonSet (1s) from k8s.io/client-go/informers/factory.go:134
I0919 11:30:12.805821  108638 reflector.go:118] Starting reflector *v1.Pod (1s) from k8s.io/client-go/informers/factory.go:134
I0919 11:30:12.805828  108638 reflector.go:153] Listing and watching *v1.DaemonSet from k8s.io/client-go/informers/factory.go:134
I0919 11:30:12.805840  108638 reflector.go:153] Listing and watching *v1.Pod from k8s.io/client-go/informers/factory.go:134
I0919 11:30:12.806905  108638 httplog.go:90] GET /apis/apps/v1/daemonsets?limit=500&resourceVersion=0: (489.019µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48582]
I0919 11:30:12.806907  108638 httplog.go:90] GET /apis/coordination.k8s.io/v1beta1/leases?limit=500&resourceVersion=0: (476.355µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48578]
I0919 11:30:12.806995  108638 httplog.go:90] GET /api/v1/pods?limit=500&resourceVersion=0: (580.121µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48580]
I0919 11:30:12.807421  108638 get.go:251] Starting watch for /apis/coordination.k8s.io/v1beta1/leases, rv=59519 labels= fields= timeout=9m39s
I0919 11:30:12.807668  108638 get.go:251] Starting watch for /apis/apps/v1/daemonsets, rv=59519 labels= fields= timeout=5m36s
I0919 11:30:12.807882  108638 get.go:251] Starting watch for /api/v1/pods, rv=59519 labels= fields= timeout=8m42s
I0919 11:30:12.867679  108638 node_lifecycle_controller.go:718] Controller observed a Node deletion: node-0
I0919 11:30:12.867712  108638 controller_utils.go:168] Recording Removing Node node-0 from Controller event message for node node-0
I0919 11:30:12.867732  108638 node_lifecycle_controller.go:718] Controller observed a Node deletion: node-1
I0919 11:30:12.867738  108638 controller_utils.go:168] Recording Removing Node node-1 from Controller event message for node node-1
I0919 11:30:12.867750  108638 node_lifecycle_controller.go:718] Controller observed a Node deletion: node-2
I0919 11:30:12.867756  108638 controller_utils.go:168] Recording Removing Node node-2 from Controller event message for node node-2
I0919 11:30:12.868067  108638 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-2", UID:"3afc897f-3a8a-4278-a86e-a4bd7bda31c4", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RemovingNode' Node node-2 event: Removing Node node-2 from Controller
I0919 11:30:12.868089  108638 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-0", UID:"890f905f-7836-4ecb-bd5e-3d38389de55b", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RemovingNode' Node node-0 event: Removing Node node-0 from Controller
I0919 11:30:12.868101  108638 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-1", UID:"d75a477e-e889-4d59-8670-a8ac28b5e766", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RemovingNode' Node node-1 event: Removing Node node-1 from Controller
I0919 11:30:12.870514  108638 httplog.go:90] POST /api/v1/namespaces/default/events: (2.334938ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45288]
I0919 11:30:12.872556  108638 httplog.go:90] POST /api/v1/namespaces/default/events: (1.589196ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45288]
I0919 11:30:12.874499  108638 httplog.go:90] POST /api/v1/namespaces/default/events: (1.359205ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45288]
I0919 11:30:12.905717  108638 shared_informer.go:227] caches populated
I0919 11:30:12.905745  108638 shared_informer.go:227] caches populated
I0919 11:30:12.905755  108638 shared_informer.go:227] caches populated
I0919 11:30:12.905762  108638 shared_informer.go:204] Caches are synced for taint 
I0919 11:30:12.905833  108638 taint_manager.go:186] Starting NoExecuteTaintManager
I0919 11:30:12.905765  108638 shared_informer.go:227] caches populated
I0919 11:30:12.905856  108638 shared_informer.go:227] caches populated
I0919 11:30:12.905867  108638 shared_informer.go:227] caches populated
I0919 11:30:12.905872  108638 shared_informer.go:227] caches populated
I0919 11:30:12.905878  108638 shared_informer.go:227] caches populated
I0919 11:30:12.905883  108638 shared_informer.go:227] caches populated
I0919 11:30:12.909122  108638 httplog.go:90] POST /api/v1/nodes: (2.595132ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48586]
I0919 11:30:12.909510  108638 node_tree.go:93] Added node "node-0" in group "region1:\x00:zone1" to NodeTree
I0919 11:30:12.909556  108638 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-0"}
I0919 11:30:12.909689  108638 taint_manager.go:438] Updating known taints on node node-0: []
I0919 11:30:12.911722  108638 httplog.go:90] POST /api/v1/nodes: (2.145591ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48586]
I0919 11:30:12.911818  108638 node_tree.go:93] Added node "node-1" in group "region1:\x00:zone1" to NodeTree
I0919 11:30:12.911872  108638 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-1"}
I0919 11:30:12.911890  108638 taint_manager.go:438] Updating known taints on node node-1: []
I0919 11:30:12.913759  108638 httplog.go:90] POST /api/v1/nodes: (1.422912ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48586]
I0919 11:30:12.913960  108638 node_tree.go:93] Added node "node-2" in group "region1:\x00:zone1" to NodeTree
I0919 11:30:12.913966  108638 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-2"}
I0919 11:30:12.914084  108638 taint_manager.go:438] Updating known taints on node node-2: []
I0919 11:30:12.916500  108638 httplog.go:90] POST /api/v1/namespaces/taint-based-evictionscd2f85b2-e89c-4065-9465-7d4a0f120e0f/pods: (1.857023ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48586]
I0919 11:30:12.916906  108638 taint_manager.go:398] Noticed pod update: types.NamespacedName{Namespace:"taint-based-evictionscd2f85b2-e89c-4065-9465-7d4a0f120e0f", Name:"testpod-2"}
I0919 11:30:12.916932  108638 scheduling_queue.go:830] About to try and schedule pod taint-based-evictionscd2f85b2-e89c-4065-9465-7d4a0f120e0f/testpod-2
I0919 11:30:12.916945  108638 scheduler.go:530] Attempting to schedule pod: taint-based-evictionscd2f85b2-e89c-4065-9465-7d4a0f120e0f/testpod-2
I0919 11:30:12.917178  108638 scheduler_binder.go:257] AssumePodVolumes for pod "taint-based-evictionscd2f85b2-e89c-4065-9465-7d4a0f120e0f/testpod-2", node "node-1"
I0919 11:30:12.917195  108638 scheduler_binder.go:267] AssumePodVolumes for pod "taint-based-evictionscd2f85b2-e89c-4065-9465-7d4a0f120e0f/testpod-2", node "node-1": all PVCs bound and nothing to do
I0919 11:30:12.917265  108638 factory.go:606] Attempting to bind testpod-2 to node-1
I0919 11:30:12.919280  108638 httplog.go:90] POST /api/v1/namespaces/taint-based-evictionscd2f85b2-e89c-4065-9465-7d4a0f120e0f/pods/testpod-2/binding: (1.781111ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48586]
I0919 11:30:12.919554  108638 scheduler.go:662] pod taint-based-evictionscd2f85b2-e89c-4065-9465-7d4a0f120e0f/testpod-2 is bound successfully on node "node-1", 3 nodes evaluated, 3 nodes were found feasible. Bound node resource: "Capacity: CPU<4>|Memory<16Gi>|Pods<110>|StorageEphemeral<0>; Allocatable: CPU<4>|Memory<16Gi>|Pods<110>|StorageEphemeral<0>.".
I0919 11:30:12.919761  108638 taint_manager.go:398] Noticed pod update: types.NamespacedName{Namespace:"taint-based-evictionscd2f85b2-e89c-4065-9465-7d4a0f120e0f", Name:"testpod-2"}
I0919 11:30:12.921746  108638 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/taint-based-evictionscd2f85b2-e89c-4065-9465-7d4a0f120e0f/events: (1.817312ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48586]
I0919 11:30:13.018992  108638 httplog.go:90] GET /api/v1/namespaces/taint-based-evictionscd2f85b2-e89c-4065-9465-7d4a0f120e0f/pods/testpod-2: (1.7005ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48586]
I0919 11:30:13.020996  108638 httplog.go:90] GET /api/v1/namespaces/taint-based-evictionscd2f85b2-e89c-4065-9465-7d4a0f120e0f/pods/testpod-2: (1.504266ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48586]
I0919 11:30:13.022784  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.200908ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48586]
I0919 11:30:13.028769  108638 httplog.go:90] PUT /api/v1/nodes/node-1/status: (5.474491ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48586]
I0919 11:30:13.029990  108638 httplog.go:90] GET /api/v1/nodes/node-1?resourceVersion=0: (436.224µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48586]
I0919 11:30:13.032957  108638 httplog.go:90] PATCH /api/v1/nodes/node-1: (2.235264ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48586]
I0919 11:30:13.033343  108638 controller_utils.go:204] Added [&Taint{Key:node.kubernetes.io/not-ready,Value:,Effect:NoSchedule,TimeAdded:2019-09-19 11:30:13.028985886 +0000 UTC m=+328.211428352,}] Taint to Node node-1
I0919 11:30:13.033386  108638 controller_utils.go:216] Made sure that Node node-1 has no [] Taint
I0919 11:30:13.067026  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:13.067134  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:13.068475  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:13.068588  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:13.069453  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:13.069831  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:13.073161  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:13.131927  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.96295ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48586]
I0919 11:30:13.231830  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.845511ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48586]
I0919 11:30:13.331559  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.642116ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48586]
I0919 11:30:13.431400  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.495719ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48586]
I0919 11:30:13.482721  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:13.482722  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:13.483451  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:13.484492  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:13.484811  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:13.484994  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:13.531825  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.877423ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48586]
I0919 11:30:13.550416  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:13.550604  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:13.551902  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:13.551925  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:13.552487  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:13.552724  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:13.603198  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:13.603313  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:13.603873  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:13.603944  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:13.604753  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:13.604761  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:13.631898  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.912504ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48586]
I0919 11:30:13.691879  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:13.731589  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.618387ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48586]
I0919 11:30:13.756071  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:13.807421  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:13.831813  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.836819ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48586]
I0919 11:30:13.931680  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.667144ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48586]
I0919 11:30:14.031618  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.661466ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48586]
I0919 11:30:14.067225  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:14.067338  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:14.068602  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:14.068753  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:14.069618  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:14.070024  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:14.073358  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:14.131899  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.918197ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48586]
I0919 11:30:14.231464  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.543232ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48586]
I0919 11:30:14.331757  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.793086ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48586]
I0919 11:30:14.431811  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.870444ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48586]
I0919 11:30:14.483021  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:14.483030  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:14.483623  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:14.484710  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:14.484978  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:14.485199  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:14.531582  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.64006ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48586]
I0919 11:30:14.550741  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:14.550827  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:14.552043  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:14.552072  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:14.552713  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:14.553025  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:14.603388  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:14.603495  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:14.604030  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:14.604097  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:14.604903  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:14.604913  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:14.632381  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.536167ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48586]
I0919 11:30:14.692077  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:14.732401  108638 httplog.go:90] GET /api/v1/nodes/node-1: (2.256679ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48586]
I0919 11:30:14.756262  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:14.807544  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:14.831582  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.566981ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48586]
I0919 11:30:14.931616  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.578173ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48586]
I0919 11:30:15.032224  108638 httplog.go:90] GET /api/v1/nodes/node-1: (2.252946ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48586]
I0919 11:30:15.067448  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:15.067546  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:15.068842  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:15.068872  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:15.069845  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:15.070206  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:15.073556  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:15.132958  108638 httplog.go:90] GET /api/v1/nodes/node-1: (2.905952ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48586]
I0919 11:30:15.233520  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.668487ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48586]
I0919 11:30:15.332034  108638 httplog.go:90] GET /api/v1/nodes/node-1: (2.048942ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48586]
I0919 11:30:15.431659  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.704889ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48586]
I0919 11:30:15.483265  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:15.483308  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:15.483731  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:15.484872  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:15.485232  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:15.485434  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:15.532086  108638 httplog.go:90] GET /api/v1/nodes/node-1: (2.046966ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48586]
I0919 11:30:15.551087  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:15.551178  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:15.552164  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:15.552178  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:15.552979  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:15.553270  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:15.603700  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:15.603832  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:15.604307  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:15.604462  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:15.605148  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:15.605153  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:15.631863  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.794638ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48586]
I0919 11:30:15.692290  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:15.731505  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.594806ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48586]
I0919 11:30:15.756395  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:15.807715  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:15.831707  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.736914ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48586]
I0919 11:30:15.931621  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.611806ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48586]
I0919 11:30:15.958775  108638 httplog.go:90] GET /api/v1/namespaces/default: (1.592396ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36758]
I0919 11:30:15.960830  108638 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.552153ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36758]
I0919 11:30:15.962802  108638 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.471147ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36758]
I0919 11:30:16.031766  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.801078ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48586]
I0919 11:30:16.067696  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:16.067714  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:16.069070  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:16.069074  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:16.069991  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:16.070985  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:16.074228  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:16.131311  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.347755ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48586]
I0919 11:30:16.231682  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.644196ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48586]
I0919 11:30:16.331560  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.651325ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48586]
I0919 11:30:16.431708  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.764259ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48586]
I0919 11:30:16.483474  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:16.483521  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:16.483875  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:16.485041  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:16.485443  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:16.485677  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:16.531704  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.695501ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48586]
I0919 11:30:16.551273  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:16.551280  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:16.552421  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:16.552435  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:16.553246  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:16.553493  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:16.603893  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:16.604010  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:16.604471  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:16.604652  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:16.605278  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:16.605284  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:16.631873  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.816959ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48586]
I0919 11:30:16.692463  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:16.731548  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.618606ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48586]
I0919 11:30:16.756573  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:16.807934  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:16.831630  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.671604ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48586]
I0919 11:30:16.931573  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.598509ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48586]
I0919 11:30:17.031389  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.489262ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48586]
I0919 11:30:17.067878  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:17.067908  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:17.069162  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:17.069214  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:17.070143  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:17.071126  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:17.074378  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:17.131528  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.586585ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48586]
I0919 11:30:17.232444  108638 httplog.go:90] GET /api/v1/nodes/node-1: (2.2124ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48586]
I0919 11:30:17.331456  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.487892ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48586]
I0919 11:30:17.349272  108638 httplog.go:90] GET /api/v1/namespaces/default: (1.546661ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45288]
I0919 11:30:17.351006  108638 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.304146ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45288]
I0919 11:30:17.352520  108638 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.126451ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45288]
I0919 11:30:17.431625  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.697784ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48586]
I0919 11:30:17.483691  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:17.483697  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:17.484026  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:17.485224  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:17.485673  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:17.485906  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:17.531932  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.942581ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48586]
I0919 11:30:17.551479  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:17.551479  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:17.552582  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:17.552592  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:17.553350  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:17.553688  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:17.604111  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:17.604196  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:17.604661  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:17.604815  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:17.605426  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:17.605429  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:17.631455  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.498688ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48586]
I0919 11:30:17.692809  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:17.731661  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.676877ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48586]
I0919 11:30:17.756936  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:17.808234  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:17.831558  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.642908ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48586]
I0919 11:30:17.905990  108638 node_lifecycle_controller.go:706] Controller observed a new Node: "node-0"
I0919 11:30:17.906025  108638 controller_utils.go:168] Recording Registered Node node-0 in Controller event message for node node-0
I0919 11:30:17.906089  108638 node_lifecycle_controller.go:1244] Initializing eviction metric for zone: region1:\x00:zone1
I0919 11:30:17.906106  108638 node_lifecycle_controller.go:706] Controller observed a new Node: "node-1"
I0919 11:30:17.906111  108638 controller_utils.go:168] Recording Registered Node node-1 in Controller event message for node node-1
I0919 11:30:17.906121  108638 node_lifecycle_controller.go:706] Controller observed a new Node: "node-2"
I0919 11:30:17.906125  108638 controller_utils.go:168] Recording Registered Node node-2 in Controller event message for node node-2
W0919 11:30:17.906153  108638 node_lifecycle_controller.go:940] Missing timestamp for Node node-0. Assuming now as a timestamp.
W0919 11:30:17.906194  108638 node_lifecycle_controller.go:940] Missing timestamp for Node node-1. Assuming now as a timestamp.
I0919 11:30:17.906216  108638 node_lifecycle_controller.go:770] Node node-1 is NotReady as of 2019-09-19 11:30:17.906204238 +0000 UTC m=+333.088646678. Adding it to the Taint queue.
W0919 11:30:17.906239  108638 node_lifecycle_controller.go:940] Missing timestamp for Node node-2. Assuming now as a timestamp.
I0919 11:30:17.906262  108638 node_lifecycle_controller.go:1144] Controller detected that zone region1:\x00:zone1 is now in state Normal.
I0919 11:30:17.906489  108638 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-2", UID:"2ca72d14-80b3-454f-a5d9-ddd609700e8b", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node node-2 event: Registered Node node-2 in Controller
I0919 11:30:17.906530  108638 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-1", UID:"df512815-a70c-4e63-b0a4-2d10a13b5a3e", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node node-1 event: Registered Node node-1 in Controller
I0919 11:30:17.906545  108638 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-0", UID:"521368a8-46cc-43ef-80df-2bdad85fe22c", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node node-0 event: Registered Node node-0 in Controller
I0919 11:30:17.908721  108638 httplog.go:90] POST /api/v1/namespaces/default/events: (2.048339ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48586]
I0919 11:30:17.910865  108638 httplog.go:90] POST /api/v1/namespaces/default/events: (1.688667ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48586]
I0919 11:30:17.912739  108638 httplog.go:90] POST /api/v1/namespaces/default/events: (1.461635ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48586]
I0919 11:30:17.913535  108638 httplog.go:90] GET /api/v1/nodes/node-1?resourceVersion=0: (399.753µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48586]
I0919 11:30:17.916124  108638 httplog.go:90] PATCH /api/v1/nodes/node-1: (1.931562ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48586]
I0919 11:30:17.916426  108638 controller_utils.go:204] Added [&Taint{Key:node.kubernetes.io/not-ready,Value:,Effect:NoExecute,TimeAdded:2019-09-19 11:30:17.912943589 +0000 UTC m=+333.095386311,}] Taint to Node node-1
I0919 11:30:17.916468  108638 controller_utils.go:216] Made sure that Node node-1 has no [&Taint{Key:node.kubernetes.io/unreachable,Value:,Effect:NoExecute,TimeAdded:<nil>,}] Taint
I0919 11:30:17.916789  108638 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-1"}
I0919 11:30:17.916890  108638 taint_manager.go:438] Updating known taints on node node-1: [{node.kubernetes.io/not-ready  NoExecute 2019-09-19 11:30:17 +0000 UTC}]
I0919 11:30:17.917022  108638 timed_workers.go:110] Adding TimedWorkerQueue item taint-based-evictionscd2f85b2-e89c-4065-9465-7d4a0f120e0f/testpod-2 at 2019-09-19 11:30:17.917012148 +0000 UTC m=+333.099454611 to be fired at 2019-09-19 11:30:17.917012148 +0000 UTC m=+333.099454611
I0919 11:30:17.917133  108638 taint_manager.go:105] NoExecuteTaintManager is deleting Pod: taint-based-evictionscd2f85b2-e89c-4065-9465-7d4a0f120e0f/testpod-2
I0919 11:30:17.917529  108638 event.go:255] Event(v1.ObjectReference{Kind:"Pod", Namespace:"taint-based-evictionscd2f85b2-e89c-4065-9465-7d4a0f120e0f", Name:"testpod-2", UID:"", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TaintManagerEviction' Marking for deletion Pod taint-based-evictionscd2f85b2-e89c-4065-9465-7d4a0f120e0f/testpod-2
I0919 11:30:17.919375  108638 httplog.go:90] POST /api/v1/namespaces/taint-based-evictionscd2f85b2-e89c-4065-9465-7d4a0f120e0f/events: (1.65044ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48588]
I0919 11:30:17.919522  108638 httplog.go:90] DELETE /api/v1/namespaces/taint-based-evictionscd2f85b2-e89c-4065-9465-7d4a0f120e0f/pods/testpod-2: (2.10722ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48586]
I0919 11:30:17.931466  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.571613ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48586]
I0919 11:30:18.031796  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.847909ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48586]
I0919 11:30:18.068047  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:18.068107  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:18.069330  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:18.069333  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:18.070283  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:18.071249  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:18.074555  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:18.131605  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.620916ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48586]
I0919 11:30:18.231760  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.815859ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48586]
I0919 11:30:18.331633  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.701438ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48586]
I0919 11:30:18.431741  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.796563ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48586]
I0919 11:30:18.483900  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:18.483912  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:18.484268  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:18.485354  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:18.485831  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:18.486053  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:18.531567  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.579494ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48586]
I0919 11:30:18.551701  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:18.551704  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:18.552731  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:18.552783  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:18.553494  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:18.553867  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:18.604362  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:18.604742  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:18.604839  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:18.604978  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:18.605585  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:18.605596  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:18.631505  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.572822ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48586]
I0919 11:30:18.693023  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:18.731665  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.676037ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48586]
I0919 11:30:18.757294  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:18.808423  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:18.831757  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.774562ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48586]
I0919 11:30:18.931699  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.742016ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48586]
I0919 11:30:19.031705  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.694475ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48586]
I0919 11:30:19.068495  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:19.068537  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:19.069462  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:19.069661  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:19.070530  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:19.071480  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:19.074886  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:19.131779  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.797955ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48586]
I0919 11:30:19.231741  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.752432ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48586]
I0919 11:30:19.331494  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.591319ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48586]
I0919 11:30:19.431389  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.461035ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48586]
I0919 11:30:19.484060  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:19.484062  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:19.484400  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:19.485548  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:19.485996  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:19.486264  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:19.531371  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.474854ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48586]
I0919 11:30:19.551876  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:19.551881  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:19.552937  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:19.552942  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:19.553633  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:19.554065  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:19.604554  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:19.604918  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:19.605007  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:19.605162  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:19.605716  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:19.605750  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:19.631444  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.524244ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48586]
I0919 11:30:19.693203  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:19.731673  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.735116ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48586]
I0919 11:30:19.757501  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:19.808765  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:19.831302  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.327217ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48586]
I0919 11:30:19.931397  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.436912ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48586]
I0919 11:30:20.031712  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.642664ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48586]
I0919 11:30:20.068692  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:20.068692  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:20.069736  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:20.069758  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:20.070679  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:20.071666  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:20.075273  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:20.131584  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.602089ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48586]
I0919 11:30:20.232896  108638 httplog.go:90] GET /api/v1/nodes/node-1: (2.660413ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48586]
I0919 11:30:20.331409  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.52402ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48586]
I0919 11:30:20.431673  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.706081ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48586]
I0919 11:30:20.484273  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:20.484274  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:20.484498  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:20.485757  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:20.486194  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:20.486446  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:20.531308  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.442207ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48586]
I0919 11:30:20.552153  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:20.552153  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:20.553054  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:20.553092  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:20.553805  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:20.554233  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:20.604742  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:20.605123  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:20.605239  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:20.605299  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:20.605886  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:20.605902  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:20.631532  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.576989ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48586]
I0919 11:30:20.693480  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:20.731466  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.542808ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48586]
I0919 11:30:20.757697  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:20.809006  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:20.831934  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.913505ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48586]
I0919 11:30:20.931438  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.594581ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48586]
I0919 11:30:21.031524  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.637605ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48586]
I0919 11:30:21.068829  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:21.068831  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:21.069845  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:21.069872  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:21.070824  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:21.071795  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:21.075451  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:21.131675  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.683936ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48586]
I0919 11:30:21.231805  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.704178ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48586]
I0919 11:30:21.331577  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.590467ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48586]
I0919 11:30:21.431229  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.324172ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48586]
I0919 11:30:21.484438  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:21.484572  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:21.484735  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:21.485979  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:21.486354  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:21.486604  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:21.531506  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.575805ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48586]
I0919 11:30:21.552350  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:21.552382  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:21.553208  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:21.553263  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:21.553943  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:21.554411  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:21.604937  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:21.605287  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:21.605407  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:21.605501  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:21.605983  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:21.606102  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:21.631462  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.534021ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48586]
I0919 11:30:21.693910  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:21.731455  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.575833ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48586]
I0919 11:30:21.757901  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:21.809178  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:21.831627  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.634499ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48586]
I0919 11:30:21.931576  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.545535ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48586]
I0919 11:30:22.031740  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.757701ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48586]
I0919 11:30:22.069142  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:22.069138  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:22.070000  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:22.070002  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:22.070961  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:22.071944  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:22.075676  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:22.131634  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.663776ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48586]
I0919 11:30:22.231934  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.997856ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48586]
I0919 11:30:22.276017  108638 httplog.go:90] GET /api/v1/namespaces/default: (2.014745ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33506]
I0919 11:30:22.277696  108638 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.19026ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33506]
I0919 11:30:22.279214  108638 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.019455ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33506]
I0919 11:30:22.331235  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.368237ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48586]
I0919 11:30:22.408524  108638 httplog.go:90] GET /api/v1/namespaces/default: (1.477445ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48586]
I0919 11:30:22.410249  108638 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.211382ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48586]
I0919 11:30:22.411854  108638 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (911.657µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48586]
I0919 11:30:22.431563  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.600794ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48586]
I0919 11:30:22.484593  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:22.484713  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:22.484870  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:22.486159  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:22.486499  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:22.486752  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:22.531531  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.611672ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48586]
I0919 11:30:22.552467  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:22.552715  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:22.553348  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:22.553462  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:22.554085  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:22.554604  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:22.605133  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:22.605453  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:22.605568  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:22.605670  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:22.606175  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:22.606269  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:22.631297  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.389149ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48586]
I0919 11:30:22.694115  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:22.731571  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.644332ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48586]
I0919 11:30:22.758075  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:22.809396  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:22.831544  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.623995ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48586]
I0919 11:30:22.906470  108638 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 5.000299611s. Last Ready is: &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:0001-01-01 00:00:00 +0000 UTC,Reason:,Message:,}
I0919 11:30:22.906516  108638 node_lifecycle_controller.go:1012] Condition MemoryPressure of node node-0 was never updated by kubelet
I0919 11:30:22.906526  108638 node_lifecycle_controller.go:1012] Condition DiskPressure of node node-0 was never updated by kubelet
I0919 11:30:22.906532  108638 node_lifecycle_controller.go:1012] Condition PIDPressure of node node-0 was never updated by kubelet
I0919 11:30:22.909382  108638 httplog.go:90] PUT /api/v1/nodes/node-0/status: (2.411034ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48586]
I0919 11:30:22.909716  108638 controller_utils.go:180] Recording status change NodeNotReady event message for node node-0
I0919 11:30:22.909750  108638 controller_utils.go:124] Update ready status of pods on node [node-0]
I0919 11:30:22.909883  108638 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-0", UID:"521368a8-46cc-43ef-80df-2bdad85fe22c", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeNotReady' Node node-0 status is now: NodeNotReady
I0919 11:30:22.910475  108638 httplog.go:90] GET /api/v1/nodes/node-0?resourceVersion=0: (414.961µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48588]
I0919 11:30:22.911364  108638 httplog.go:90] GET /api/v1/pods?fieldSelector=spec.nodeName%3Dnode-0: (1.42225ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48586]
I0919 11:30:22.911624  108638 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 5.005415261s. Last Ready is: &NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:0001-01-01 00:00:00 +0000 UTC,Reason:,Message:,}
I0919 11:30:22.911700  108638 node_lifecycle_controller.go:1012] Condition MemoryPressure of node node-1 was never updated by kubelet
I0919 11:30:22.911715  108638 node_lifecycle_controller.go:1012] Condition DiskPressure of node node-1 was never updated by kubelet
I0919 11:30:22.911724  108638 node_lifecycle_controller.go:1012] Condition PIDPressure of node node-1 was never updated by kubelet
I0919 11:30:22.911858  108638 httplog.go:90] POST /api/v1/namespaces/default/events: (1.422307ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48810]
I0919 11:30:22.913826  108638 httplog.go:90] PUT /api/v1/nodes/node-1/status: (1.874587ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48586]
I0919 11:30:22.914077  108638 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 5.007829614s. Last Ready is: &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:0001-01-01 00:00:00 +0000 UTC,Reason:,Message:,}
I0919 11:30:22.914108  108638 node_lifecycle_controller.go:1012] Condition MemoryPressure of node node-2 was never updated by kubelet
I0919 11:30:22.914117  108638 node_lifecycle_controller.go:1012] Condition DiskPressure of node node-2 was never updated by kubelet
I0919 11:30:22.914123  108638 node_lifecycle_controller.go:1012] Condition PIDPressure of node node-2 was never updated by kubelet
I0919 11:30:22.914799  108638 httplog.go:90] GET /api/v1/nodes/node-1?resourceVersion=0: (359.3µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48810]
I0919 11:30:22.915399  108638 httplog.go:90] PATCH /api/v1/nodes/node-0: (4.222856ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48588]
I0919 11:30:22.915715  108638 controller_utils.go:204] Added [&Taint{Key:node.kubernetes.io/unreachable,Value:,Effect:NoSchedule,TimeAdded:2019-09-19 11:30:22.909881963 +0000 UTC m=+338.092324462,}] Taint to Node node-0
I0919 11:30:22.915753  108638 controller_utils.go:216] Made sure that Node node-0 has no [] Taint
I0919 11:30:22.916296  108638 httplog.go:90] PUT /api/v1/nodes/node-2/status: (1.898154ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48586]
I0919 11:30:22.916536  108638 controller_utils.go:180] Recording status change NodeNotReady event message for node node-2
I0919 11:30:22.916563  108638 controller_utils.go:124] Update ready status of pods on node [node-2]
I0919 11:30:22.916900  108638 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-2", UID:"2ca72d14-80b3-454f-a5d9-ddd609700e8b", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeNotReady' Node node-2 status is now: NodeNotReady
I0919 11:30:22.917059  108638 httplog.go:90] GET /api/v1/nodes/node-2?resourceVersion=0: (343.651µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48586]
I0919 11:30:22.917731  108638 httplog.go:90] GET /api/v1/pods?fieldSelector=spec.nodeName%3Dnode-2: (963.806µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48810]
I0919 11:30:22.917927  108638 node_lifecycle_controller.go:1094] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
I0919 11:30:22.918316  108638 httplog.go:90] POST /api/v1/namespaces/default/events: (1.21872ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48812]
I0919 11:30:22.918812  108638 httplog.go:90] GET /api/v1/nodes/node-1?resourceVersion=0: (359.219µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48810]
I0919 11:30:22.919411  108638 httplog.go:90] PATCH /api/v1/nodes/node-1: (3.494192ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48588]
I0919 11:30:22.919672  108638 controller_utils.go:204] Added [&Taint{Key:node.kubernetes.io/unreachable,Value:,Effect:NoSchedule,TimeAdded:2019-09-19 11:30:22.914257547 +0000 UTC m=+338.096700000,}] Taint to Node node-1
I0919 11:30:22.919948  108638 httplog.go:90] PATCH /api/v1/nodes/node-2: (2.005313ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48586]
I0919 11:30:22.920149  108638 httplog.go:90] GET /api/v1/nodes/node-1?resourceVersion=0: (328.019µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48588]
I0919 11:30:22.920297  108638 controller_utils.go:204] Added [&Taint{Key:node.kubernetes.io/unreachable,Value:,Effect:NoSchedule,TimeAdded:2019-09-19 11:30:22.916471391 +0000 UTC m=+338.098913859,}] Taint to Node node-2
I0919 11:30:22.920326  108638 controller_utils.go:216] Made sure that Node node-2 has no [] Taint
I0919 11:30:22.923844  108638 store.go:362] GuaranteedUpdate of /fbbe6906-7e92-4457-97e8-a505a3086ac8/minions/node-1 failed because of a conflict, going to retry
I0919 11:30:22.924298  108638 httplog.go:90] PATCH /api/v1/nodes/node-1: (3.15453ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48812]
I0919 11:30:22.925285  108638 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-1"}
I0919 11:30:22.925314  108638 taint_manager.go:438] Updating known taints on node node-1: []
I0919 11:30:22.925330  108638 taint_manager.go:459] All taints were removed from the Node node-1. Cancelling all evictions...
I0919 11:30:22.925340  108638 timed_workers.go:129] Cancelling TimedWorkerQueue item taint-based-evictionscd2f85b2-e89c-4065-9465-7d4a0f120e0f/testpod-2 at 2019-09-19 11:30:22.925337189 +0000 UTC m=+338.107779651
I0919 11:30:22.925914  108638 httplog.go:90] PATCH /api/v1/nodes/node-1: (4.90776ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48586]
I0919 11:30:22.926172  108638 controller_utils.go:216] Made sure that Node node-1 has no [&Taint{Key:node.kubernetes.io/not-ready,Value:,Effect:NoSchedule,TimeAdded:2019-09-19 11:30:13 +0000 UTC,}] Taint
I0919 11:30:22.926583  108638 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-1"}
I0919 11:30:22.926598  108638 taint_manager.go:438] Updating known taints on node node-1: [{node.kubernetes.io/not-ready  NoExecute 2019-09-19 11:30:17 +0000 UTC}]
I0919 11:30:22.926621  108638 timed_workers.go:110] Adding TimedWorkerQueue item taint-based-evictionscd2f85b2-e89c-4065-9465-7d4a0f120e0f/testpod-2 at 2019-09-19 11:30:22.926611752 +0000 UTC m=+338.109054216 to be fired at 2019-09-19 11:30:22.926611752 +0000 UTC m=+338.109054216
I0919 11:30:22.926722  108638 taint_manager.go:105] NoExecuteTaintManager is deleting Pod: taint-based-evictionscd2f85b2-e89c-4065-9465-7d4a0f120e0f/testpod-2
I0919 11:30:22.926729  108638 httplog.go:90] GET /api/v1/nodes/node-1?resourceVersion=0: (352.255µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48586]
I0919 11:30:22.926904  108638 event.go:255] Event(v1.ObjectReference{Kind:"Pod", Namespace:"taint-based-evictionscd2f85b2-e89c-4065-9465-7d4a0f120e0f", Name:"testpod-2", UID:"", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TaintManagerEviction' Marking for deletion Pod taint-based-evictionscd2f85b2-e89c-4065-9465-7d4a0f120e0f/testpod-2
I0919 11:30:22.928225  108638 httplog.go:90] DELETE /api/v1/namespaces/taint-based-evictionscd2f85b2-e89c-4065-9465-7d4a0f120e0f/pods/testpod-2: (1.195192ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48812]
I0919 11:30:22.929078  108638 httplog.go:90] PATCH /api/v1/nodes/node-1: (1.46956ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48814]
I0919 11:30:22.929428  108638 controller_utils.go:204] Added [&Taint{Key:node.kubernetes.io/unreachable,Value:,Effect:NoSchedule,TimeAdded:2019-09-19 11:30:22.926208725 +0000 UTC m=+338.108651230,}] Taint to Node node-1
I0919 11:30:22.929777  108638 httplog.go:90] PATCH /api/v1/namespaces/taint-based-evictionscd2f85b2-e89c-4065-9465-7d4a0f120e0f/events/testpod-2.15c5d3860daffb05: (1.929705ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48586]
I0919 11:30:22.930379  108638 httplog.go:90] GET /api/v1/nodes/node-1?resourceVersion=0: (440.067µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48812]
I0919 11:30:22.930737  108638 controller_utils.go:216] Made sure that Node node-1 has no [&Taint{Key:node.kubernetes.io/not-ready,Value:,Effect:NoSchedule,TimeAdded:2019-09-19 11:30:13 +0000 UTC,}] Taint
I0919 11:30:22.930914  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.101623ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48814]
I0919 11:30:23.031310  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.40812ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48812]
I0919 11:30:23.069328  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:23.069329  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:23.070183  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:23.070185  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:23.071143  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:23.072085  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:23.075851  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:23.131498  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.554801ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48812]
I0919 11:30:23.231853  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.874216ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48812]
I0919 11:30:23.331535  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.635543ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48812]
I0919 11:30:23.431372  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.504548ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48812]
I0919 11:30:23.484749  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:23.484818  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:23.484984  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:23.486301  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:23.486689  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:23.486966  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:23.531449  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.581831ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48812]
I0919 11:30:23.552634  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:23.552856  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:23.553434  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:23.553623  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:23.554232  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:23.554811  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:23.605310  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:23.605577  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:23.605698  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:23.605807  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:23.606305  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:23.606412  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:23.631604  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.593153ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48812]
I0919 11:30:23.694317  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:23.731583  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.63021ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48812]
I0919 11:30:23.758253  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:23.809608  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:23.831903  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.953607ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48812]
I0919 11:30:23.931567  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.616187ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48812]
I0919 11:30:24.031772  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.774266ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48812]
I0919 11:30:24.069603  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:24.069606  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:24.070393  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:24.070396  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:24.071378  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:24.072246  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:24.076083  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:24.131932  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.955917ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48812]
I0919 11:30:24.231484  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.520929ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48812]
I0919 11:30:24.332941  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.802361ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48812]
I0919 11:30:24.431867  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.906997ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48812]
I0919 11:30:24.484942  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:24.484969  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:24.485115  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:24.486458  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:24.486869  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:24.487203  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:24.531882  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.89847ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48812]
I0919 11:30:24.552817  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:24.553026  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:24.554372  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:24.555111  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:24.556719  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:24.556994  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:24.605475  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:24.605745  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:24.605859  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:24.606025  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:24.606446  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:24.606541  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:24.631885  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.802733ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48812]
I0919 11:30:24.694512  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:24.731969  108638 httplog.go:90] GET /api/v1/nodes/node-1: (2.001134ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48812]
I0919 11:30:24.758572  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:24.809809  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:24.831584  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.637591ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48812]
I0919 11:30:24.931772  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.814088ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48812]
I0919 11:30:25.032852  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.968831ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48812]
I0919 11:30:25.069807  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:25.069811  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:25.070552  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:25.070574  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:25.071545  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:25.072394  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:25.076259  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:25.131420  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.591007ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48812]
I0919 11:30:25.231729  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.767945ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48812]
I0919 11:30:25.331775  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.767311ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48812]
I0919 11:30:25.431854  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.866948ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48812]
I0919 11:30:25.485283  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:25.486134  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:25.486136  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:25.486681  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:25.487029  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:25.487368  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:25.531637  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.658626ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48812]
I0919 11:30:25.552995  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:25.553207  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:25.554618  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:25.555372  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:25.556893  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:25.557198  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:25.605730  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:25.605819  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:25.606027  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:25.606181  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:25.606610  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:25.606772  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:25.631811  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.69256ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48812]
I0919 11:30:25.694671  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:25.731846  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.888192ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48812]
I0919 11:30:25.758742  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:25.810013  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:25.831573  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.658719ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48812]
I0919 11:30:25.931603  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.670139ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48812]
I0919 11:30:25.958734  108638 httplog.go:90] GET /api/v1/namespaces/default: (1.482443ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36758]
I0919 11:30:25.960553  108638 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.341267ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36758]
I0919 11:30:25.962041  108638 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.067451ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36758]
I0919 11:30:26.031745  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.808112ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48812]
I0919 11:30:26.069963  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:26.069963  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:26.070686  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:26.070748  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:26.071725  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:26.072543  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:26.076431  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:26.132103  108638 httplog.go:90] GET /api/v1/nodes/node-1: (2.057624ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48812]
I0919 11:30:26.231525  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.555248ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48812]
I0919 11:30:26.331477  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.525464ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48812]
I0919 11:30:26.431685  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.777358ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48812]
I0919 11:30:26.485595  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:26.486332  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:26.486410  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:26.486845  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:26.487191  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:26.487533  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:26.531618  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.611145ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48812]
I0919 11:30:26.553350  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:26.553395  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:26.554862  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:26.555590  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:26.557064  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:26.557353  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:26.605996  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:26.606003  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:26.606349  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:26.606346  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:26.606707  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:26.606933  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:26.631856  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.898322ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48812]
I0919 11:30:26.694890  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:26.731700  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.730185ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48812]
I0919 11:30:26.759053  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:26.810289  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:26.831840  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.895392ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48812]
I0919 11:30:26.931543  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.636075ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48812]
I0919 11:30:27.031890  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.836207ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48812]
I0919 11:30:27.070185  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:27.070188  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:27.070883  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:27.070905  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:27.071970  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:27.072825  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:27.076624  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:27.131836  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.841117ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48812]
I0919 11:30:27.231834  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.792267ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48812]
I0919 11:30:27.331886  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.752671ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48812]
I0919 11:30:27.349548  108638 httplog.go:90] GET /api/v1/namespaces/default: (1.621767ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45288]
I0919 11:30:27.351425  108638 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.390112ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45288]
I0919 11:30:27.352937  108638 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.086978ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45288]
I0919 11:30:27.431760  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.798668ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48812]
I0919 11:30:27.486009  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:27.486586  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:27.486602  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:27.487009  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:27.487399  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:27.487728  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:27.532007  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.878888ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48812]
I0919 11:30:27.553680  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:27.553693  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:27.555090  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:27.555779  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:27.557305  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:27.557558  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:27.606177  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:27.606194  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:27.606536  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:27.606736  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:27.606961  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:27.607281  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:27.631935  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.889036ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48812]
I0919 11:30:27.695122  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:27.731842  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.819737ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48812]
I0919 11:30:27.759389  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:27.810657  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:27.831844  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.890592ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48812]
I0919 11:30:27.924869  108638 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 10.018614421s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-19 11:30:22 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0919 11:30:27.924925  108638 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 10.018680896s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 11:30:12 +0000 UTC,LastTransitionTime:2019-09-19 11:30:22 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 11:30:27.924992  108638 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 10.018747609s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 11:30:12 +0000 UTC,LastTransitionTime:2019-09-19 11:30:22 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 11:30:27.925016  108638 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 10.018771134s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 11:30:12 +0000 UTC,LastTransitionTime:2019-09-19 11:30:22 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 11:30:27.925073  108638 node_lifecycle_controller.go:796] Node node-2 is unresponsive as of 2019-09-19 11:30:27.925058617 +0000 UTC m=+343.107501085. Adding it to the Taint queue.
I0919 11:30:27.925102  108638 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 10.018943453s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-19 11:30:22 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0919 11:30:27.925115  108638 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 10.018956047s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 11:30:12 +0000 UTC,LastTransitionTime:2019-09-19 11:30:22 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 11:30:27.925125  108638 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 10.018966262s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 11:30:12 +0000 UTC,LastTransitionTime:2019-09-19 11:30:22 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 11:30:27.925134  108638 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 10.018975701s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 11:30:12 +0000 UTC,LastTransitionTime:2019-09-19 11:30:22 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 11:30:27.925157  108638 node_lifecycle_controller.go:796] Node node-0 is unresponsive as of 2019-09-19 11:30:27.925147481 +0000 UTC m=+343.107589946. Adding it to the Taint queue.
I0919 11:30:27.925189  108638 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 10.018986493s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-19 11:30:22 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0919 11:30:27.925216  108638 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 10.019012465s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 11:30:12 +0000 UTC,LastTransitionTime:2019-09-19 11:30:22 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 11:30:27.925232  108638 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 10.019028591s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 11:30:12 +0000 UTC,LastTransitionTime:2019-09-19 11:30:22 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 11:30:27.925251  108638 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 10.019047915s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 11:30:12 +0000 UTC,LastTransitionTime:2019-09-19 11:30:22 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 11:30:27.926177  108638 httplog.go:90] GET /api/v1/nodes/node-1?resourceVersion=0: (595.045µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48812]
I0919 11:30:27.930072  108638 httplog.go:90] PATCH /api/v1/nodes/node-1: (2.853262ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48812]
I0919 11:30:27.930579  108638 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-1"}
I0919 11:30:27.930599  108638 taint_manager.go:438] Updating known taints on node node-1: [{node.kubernetes.io/not-ready  NoExecute 2019-09-19 11:30:17 +0000 UTC} {node.kubernetes.io/unreachable  NoExecute 2019-09-19 11:30:27 +0000 UTC}]
I0919 11:30:27.930630  108638 timed_workers.go:110] Adding TimedWorkerQueue item taint-based-evictionscd2f85b2-e89c-4065-9465-7d4a0f120e0f/testpod-2 at 2019-09-19 11:30:27.930619532 +0000 UTC m=+343.113061992 to be fired at 2019-09-19 11:30:27.930619532 +0000 UTC m=+343.113061992
W0919 11:30:27.930698  108638 timed_workers.go:115] Trying to add already existing work for &{NamespacedName:taint-based-evictionscd2f85b2-e89c-4065-9465-7d4a0f120e0f/testpod-2}. Skipping.
I0919 11:30:27.930716  108638 controller_utils.go:204] Added [&Taint{Key:node.kubernetes.io/unreachable,Value:,Effect:NoExecute,TimeAdded:2019-09-19 11:30:27.925280301 +0000 UTC m=+343.107722762,}] Taint to Node node-1
I0919 11:30:27.931189  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.348716ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48586]
I0919 11:30:27.931443  108638 httplog.go:90] GET /api/v1/nodes/node-1?resourceVersion=0: (441.752µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48812]
I0919 11:30:27.934389  108638 httplog.go:90] PATCH /api/v1/nodes/node-1: (2.145018ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48812]
I0919 11:30:27.934617  108638 controller_utils.go:216] Made sure that Node node-1 has no [&Taint{Key:node.kubernetes.io/not-ready,Value:,Effect:NoExecute,TimeAdded:<nil>,}] Taint
I0919 11:30:27.934969  108638 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-1"}
I0919 11:30:27.934990  108638 taint_manager.go:438] Updating known taints on node node-1: [{node.kubernetes.io/unreachable  NoExecute 2019-09-19 11:30:27 +0000 UTC}]
I0919 11:30:27.935055  108638 timed_workers.go:110] Adding TimedWorkerQueue item taint-based-evictionscd2f85b2-e89c-4065-9465-7d4a0f120e0f/testpod-2 at 2019-09-19 11:30:27.935008929 +0000 UTC m=+343.117451390 to be fired at 2019-09-19 11:35:27.935008929 +0000 UTC m=+643.117451390
W0919 11:30:27.935065  108638 timed_workers.go:115] Trying to add already existing work for &{NamespacedName:taint-based-evictionscd2f85b2-e89c-4065-9465-7d4a0f120e0f/testpod-2}. Skipping.
I0919 11:30:28.031951  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.856803ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48812]
I0919 11:30:28.070583  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:28.070601  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:28.071242  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:28.071259  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:28.072313  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:28.073009  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:28.076946  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:28.132156  108638 httplog.go:90] GET /api/v1/nodes/node-1: (2.121971ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48812]
I0919 11:30:28.231883  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.881938ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48812]
I0919 11:30:28.331890  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.890978ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48812]
I0919 11:30:28.431872  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.944347ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48812]
I0919 11:30:28.486226  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:28.486791  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:28.486827  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:28.487244  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:28.487630  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:28.487981  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:28.531821  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.789674ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48812]
I0919 11:30:28.553885  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:28.553922  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:28.555347  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:28.555989  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:28.557595  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:28.557839  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:28.606372  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:28.606429  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:28.606730  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:28.607041  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:28.607156  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:28.607465  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:28.632016  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.827336ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48812]
I0919 11:30:28.695377  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:28.731873  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.838368ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48812]
I0919 11:30:28.759626  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:28.810829  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:28.832185  108638 httplog.go:90] GET /api/v1/nodes/node-1: (2.180972ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48812]
I0919 11:30:28.931794  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.790788ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48812]
I0919 11:30:29.031947  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.886922ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48812]
I0919 11:30:29.070791  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:29.070791  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:29.071398  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:29.071432  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:29.072559  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:29.073179  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:29.077141  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:29.132028  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.927721ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48812]
I0919 11:30:29.231879  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.83738ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48812]
I0919 11:30:29.331872  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.749294ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48812]
I0919 11:30:29.432043  108638 httplog.go:90] GET /api/v1/nodes/node-1: (2.046433ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48812]
I0919 11:30:29.486594  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:29.486999  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:29.487024  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:29.487525  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:29.487885  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:29.488198  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:29.531955  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.842365ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48812]
I0919 11:30:29.554276  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:29.554277  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:29.555576  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:29.556179  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:29.557838  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:29.558004  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:29.606598  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:29.606676  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:29.606863  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:29.607235  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:29.607367  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:29.607719  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:29.632030  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.964893ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48812]
I0919 11:30:29.695602  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:29.731842  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.799654ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48812]
I0919 11:30:29.759802  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:29.811237  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:29.831883  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.891518ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48812]
I0919 11:30:29.931877  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.819417ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48812]
I0919 11:30:30.031559  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.565677ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48812]
I0919 11:30:30.071017  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:30.071071  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:30.071635  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:30.071681  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:30.072812  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:30.073416  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:30.077341  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:30.131808  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.753079ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48812]
I0919 11:30:30.231872  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.753665ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48812]
I0919 11:30:30.331798  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.769463ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48812]
I0919 11:30:30.431847  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.822929ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48812]
I0919 11:30:30.486847  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:30.487147  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:30.487153  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:30.487698  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:30.488168  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:30.488417  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:30.532092  108638 httplog.go:90] GET /api/v1/nodes/node-1: (2.025923ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48812]
I0919 11:30:30.554599  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:30.554598  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:30.555830  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:30.556390  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:30.558154  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:30.558206  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:30.606765  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:30.606875  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:30.607132  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:30.607447  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:30.607614  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:30.607836  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:30.632157  108638 httplog.go:90] GET /api/v1/nodes/node-1: (2.15545ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48812]
I0919 11:30:30.695928  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:30.731953  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.993368ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48812]
I0919 11:30:30.760006  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:30.811513  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:30.831745  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.74564ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48812]
I0919 11:30:30.931768  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.768631ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48812]
I0919 11:30:31.031939  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.906271ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48812]
I0919 11:30:31.071233  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:31.071231  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:31.071973  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:31.071976  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:31.072995  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:31.073683  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:31.077624  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:31.131923  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.799636ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48812]
I0919 11:30:31.231747  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.794237ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48812]
I0919 11:30:31.331919  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.841272ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48812]
I0919 11:30:31.431576  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.662115ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48812]
I0919 11:30:31.487069  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:31.487361  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:31.487370  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:31.487951  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:31.488370  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:31.488673  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:31.531771  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.74963ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48812]
I0919 11:30:31.554729  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:31.554731  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:31.556012  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:31.556592  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:31.558462  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:31.558462  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:31.607083  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:31.607331  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:31.607352  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:31.607894  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:31.607897  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:31.608069  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:31.631960  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.967511ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48812]
I0919 11:30:31.696128  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:31.731540  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.593856ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48812]
I0919 11:30:31.760223  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:31.811773  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:31.831846  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.807628ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48812]
I0919 11:30:31.931834  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.778535ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48812]
I0919 11:30:32.031904  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.77885ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48812]
I0919 11:30:32.071397  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:32.071398  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:32.072194  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:32.072213  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:32.073207  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:32.074048  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:32.077836  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:32.132036  108638 httplog.go:90] GET /api/v1/nodes/node-1: (2.025131ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48812]
I0919 11:30:32.231754  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.65155ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48812]
I0919 11:30:32.275922  108638 httplog.go:90] GET /api/v1/namespaces/default: (1.735931ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33506]
I0919 11:30:32.277599  108638 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.223838ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33506]
I0919 11:30:32.279279  108638 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.09758ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33506]
I0919 11:30:32.332181  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.957344ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48812]
I0919 11:30:32.408831  108638 httplog.go:90] GET /api/v1/namespaces/default: (1.623813ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48812]
I0919 11:30:32.411082  108638 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.54334ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48812]
I0919 11:30:32.413127  108638 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.375434ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48812]
I0919 11:30:32.434156  108638 httplog.go:90] GET /api/v1/nodes/node-1: (3.967338ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48812]
I0919 11:30:32.487771  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:32.487776  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:32.487790  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:32.488147  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:32.488753  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:32.488964  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:32.532336  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.822064ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48812]
I0919 11:30:32.554930  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:32.554957  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:32.556302  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:32.556917  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:32.558657  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:32.558822  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:32.607340  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:32.607568  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:32.607569  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:32.608084  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:32.608084  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:32.608242  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:32.631993  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.927986ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48812]
I0919 11:30:32.696474  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:32.731759  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.730827ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48812]
I0919 11:30:32.760591  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:32.812135  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:32.832083  108638 httplog.go:90] GET /api/v1/nodes/node-1: (2.048675ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48812]
I0919 11:30:32.931926  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.911807ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48812]
I0919 11:30:32.934939  108638 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 15.028771204s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-19 11:30:22 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0919 11:30:32.934985  108638 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 15.028826192s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 11:30:12 +0000 UTC,LastTransitionTime:2019-09-19 11:30:22 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 11:30:32.935000  108638 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 15.028841443s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 11:30:12 +0000 UTC,LastTransitionTime:2019-09-19 11:30:22 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 11:30:32.935012  108638 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 15.028854074s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 11:30:12 +0000 UTC,LastTransitionTime:2019-09-19 11:30:22 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 11:30:32.935070  108638 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 15.028867697s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-19 11:30:22 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0919 11:30:32.935082  108638 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 15.028879502s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 11:30:12 +0000 UTC,LastTransitionTime:2019-09-19 11:30:22 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 11:30:32.935093  108638 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 15.028888927s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 11:30:12 +0000 UTC,LastTransitionTime:2019-09-19 11:30:22 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 11:30:32.935103  108638 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 15.028901026s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 11:30:12 +0000 UTC,LastTransitionTime:2019-09-19 11:30:22 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 11:30:32.935130  108638 node_lifecycle_controller.go:796] Node node-1 is unresponsive as of 2019-09-19 11:30:32.935116994 +0000 UTC m=+348.117559466. Adding it to the Taint queue.
I0919 11:30:32.935151  108638 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 15.028907751s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-19 11:30:22 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0919 11:30:32.935167  108638 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 15.028923511s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 11:30:12 +0000 UTC,LastTransitionTime:2019-09-19 11:30:22 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 11:30:32.935193  108638 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 15.028944435s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 11:30:12 +0000 UTC,LastTransitionTime:2019-09-19 11:30:22 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 11:30:32.935203  108638 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 15.028960494s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 11:30:12 +0000 UTC,LastTransitionTime:2019-09-19 11:30:22 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 11:30:33.032039  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.925775ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48812]
I0919 11:30:33.071673  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:33.071676  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:33.072365  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:33.072477  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:33.073433  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:33.074193  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:33.078017  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:33.131898  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.890808ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48812]
I0919 11:30:33.231787  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.795108ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48812]
I0919 11:30:33.331694  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.729525ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48812]
I0919 11:30:33.431936  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.941015ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48812]
I0919 11:30:33.488139  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:33.488154  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:33.488187  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:33.488402  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:33.489018  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:33.489248  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:33.531702  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.695911ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48812]
I0919 11:30:33.555111  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:33.555159  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:33.556540  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:33.557133  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:33.558963  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:33.558991  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:33.607624  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:33.607808  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:33.607993  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:33.608271  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:33.608303  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:33.608443  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:33.631895  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.911348ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48812]
I0919 11:30:33.696893  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:33.731918  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.882503ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48812]
I0919 11:30:33.760729  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:33.812455  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:33.832238  108638 httplog.go:90] GET /api/v1/nodes/node-1: (2.069806ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48812]
I0919 11:30:33.931770  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.758403ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48812]
I0919 11:30:34.031687  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.702679ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48812]
I0919 11:30:34.055600  108638 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.568501ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45288]
I0919 11:30:34.057266  108638 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.222059ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45288]
I0919 11:30:34.058590  108638 httplog.go:90] GET /api/v1/namespaces/kube-node-lease: (982.753µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45288]
I0919 11:30:34.072131  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:34.072171  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:34.072482  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:34.072709  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:34.073612  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:34.074337  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:34.078416  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:34.131767  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.788273ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48812]
I0919 11:30:34.231589  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.570368ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48812]
I0919 11:30:34.331931  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.842052ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48812]
I0919 11:30:34.431518  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.509041ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48812]
I0919 11:30:34.488365  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:34.488371  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:34.488373  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:34.488555  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:34.489211  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:34.489454  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:34.531909  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.855205ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48812]
I0919 11:30:34.555308  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:34.555554  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:34.556714  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:34.557330  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:34.559119  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:34.559155  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:34.607893  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:34.607890  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:34.608439  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:34.608701  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:34.609970  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:34.609985  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:34.631981  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.872831ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48812]
I0919 11:30:34.697087  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:34.732742  108638 httplog.go:90] GET /api/v1/nodes/node-1: (2.213153ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48812]
I0919 11:30:34.760890  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:34.812722  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:34.833314  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.770022ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48812]
I0919 11:30:34.932516  108638 httplog.go:90] GET /api/v1/nodes/node-1: (2.424081ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48812]
I0919 11:30:35.031606  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.611906ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48812]
I0919 11:30:35.072320  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:35.072318  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:35.072556  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:35.072959  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:35.073824  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:35.074624  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:35.078609  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:35.132300  108638 httplog.go:90] GET /api/v1/nodes/node-1: (2.238736ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48812]
I0919 11:30:35.232249  108638 httplog.go:90] GET /api/v1/nodes/node-1: (2.045903ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48812]
I0919 11:30:35.332212  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.64622ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48812]
I0919 11:30:35.431969  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.86649ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48812]
I0919 11:30:35.488778  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:35.488899  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:35.488924  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:35.489079  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:35.489447  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:35.489749  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:35.531701  108638 httplog.go:90] GET /api/v1/nodes/node-1: (1.715125ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48812]
I0919 11:30:35.555744  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:35.555753  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:35.556965  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:35.557617  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:35.559279  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:30:35.559311  108638 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync