PR: draveness: feat: update taint nodes by condition to GA
Result: FAILURE
Tests: 8 failed / 2860 succeeded
Started: 2019-09-19 09:30
Elapsed: 29m35s
Revision
Builder: gke-prow-ssd-pool-1a225945-hwdp
Refs: master:b8866250, 82703:f642bc2f
pod: f9134826-dabf-11e9-b7bb-32cecfce85d6
infra-commit: fe9f237a8
repo: k8s.io/kubernetes
repo-commit: e9a75e611f9c04d4b4e217c83e20674593d8f0e4
repos: {u'k8s.io/kubernetes': u'master:b88662505d288297750becf968bf307dacf872fa,82703:f642bc2feb755cb6f834787163725a498cda4dce'}

Test Failures


k8s.io/kubernetes/test/integration/scheduler TestNodePIDPressure 33s

go test -v k8s.io/kubernetes/test/integration/scheduler -run TestNodePIDPressure$
=== RUN   TestNodePIDPressure
W0919 09:53:39.039614  108095 services.go:35] No CIDR for service cluster IPs specified. Default value which was 10.0.0.0/24 is deprecated and will be removed in future releases. Please specify it using --service-cluster-ip-range on kube-apiserver.
I0919 09:53:39.039635  108095 services.go:47] Setting service IP to "10.0.0.1" (read-write).
I0919 09:53:39.039649  108095 master.go:303] Node port range unspecified. Defaulting to 30000-32767.
I0919 09:53:39.039660  108095 master.go:259] Using reconciler: 
I0919 09:53:39.042007  108095 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.042170  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:53:39.042193  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:53:39.043713  108095 store.go:1342] Monitoring podtemplates count at <storage-prefix>//podtemplates
I0919 09:53:39.043760  108095 reflector.go:153] Listing and watching *core.PodTemplate from storage/cacher.go:/podtemplates
I0919 09:53:39.043759  108095 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.044128  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:53:39.044157  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:53:39.045886  108095 watch_cache.go:405] Replace watchCache (rev: 30486) 
I0919 09:53:39.047361  108095 store.go:1342] Monitoring events count at <storage-prefix>//events
I0919 09:53:39.047408  108095 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.047401  108095 reflector.go:153] Listing and watching *core.Event from storage/cacher.go:/events
I0919 09:53:39.047534  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:53:39.047557  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:53:39.049641  108095 watch_cache.go:405] Replace watchCache (rev: 30490) 
I0919 09:53:39.050143  108095 store.go:1342] Monitoring limitranges count at <storage-prefix>//limitranges
I0919 09:53:39.050196  108095 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.050328  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:53:39.050344  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:53:39.050371  108095 reflector.go:153] Listing and watching *core.LimitRange from storage/cacher.go:/limitranges
I0919 09:53:39.051266  108095 watch_cache.go:405] Replace watchCache (rev: 30490) 
I0919 09:53:39.051777  108095 store.go:1342] Monitoring resourcequotas count at <storage-prefix>//resourcequotas
I0919 09:53:39.051874  108095 reflector.go:153] Listing and watching *core.ResourceQuota from storage/cacher.go:/resourcequotas
I0919 09:53:39.052040  108095 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.052145  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:53:39.052160  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:53:39.053021  108095 watch_cache.go:405] Replace watchCache (rev: 30491) 
I0919 09:53:39.053618  108095 store.go:1342] Monitoring secrets count at <storage-prefix>//secrets
I0919 09:53:39.053686  108095 reflector.go:153] Listing and watching *core.Secret from storage/cacher.go:/secrets
I0919 09:53:39.054307  108095 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.054420  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:53:39.054440  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:53:39.054833  108095 watch_cache.go:405] Replace watchCache (rev: 30491) 
I0919 09:53:39.055284  108095 store.go:1342] Monitoring persistentvolumes count at <storage-prefix>//persistentvolumes
I0919 09:53:39.055426  108095 reflector.go:153] Listing and watching *core.PersistentVolume from storage/cacher.go:/persistentvolumes
I0919 09:53:39.055463  108095 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.055580  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:53:39.055612  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:53:39.056632  108095 watch_cache.go:405] Replace watchCache (rev: 30491) 
I0919 09:53:39.057406  108095 store.go:1342] Monitoring persistentvolumeclaims count at <storage-prefix>//persistentvolumeclaims
I0919 09:53:39.057504  108095 reflector.go:153] Listing and watching *core.PersistentVolumeClaim from storage/cacher.go:/persistentvolumeclaims
I0919 09:53:39.057908  108095 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.058019  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:53:39.058033  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:53:39.059450  108095 watch_cache.go:405] Replace watchCache (rev: 30491) 
I0919 09:53:39.060332  108095 store.go:1342] Monitoring configmaps count at <storage-prefix>//configmaps
I0919 09:53:39.060429  108095 reflector.go:153] Listing and watching *core.ConfigMap from storage/cacher.go:/configmaps
I0919 09:53:39.060524  108095 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.060723  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:53:39.060755  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:53:39.061491  108095 store.go:1342] Monitoring namespaces count at <storage-prefix>//namespaces
I0919 09:53:39.061515  108095 reflector.go:153] Listing and watching *core.Namespace from storage/cacher.go:/namespaces
I0919 09:53:39.061674  108095 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.061819  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:53:39.061841  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:53:39.062814  108095 watch_cache.go:405] Replace watchCache (rev: 30492) 
I0919 09:53:39.062819  108095 watch_cache.go:405] Replace watchCache (rev: 30492) 
I0919 09:53:39.062986  108095 store.go:1342] Monitoring endpoints count at <storage-prefix>//services/endpoints
I0919 09:53:39.063154  108095 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.063190  108095 reflector.go:153] Listing and watching *core.Endpoints from storage/cacher.go:/services/endpoints
I0919 09:53:39.063246  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:53:39.063324  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:53:39.064289  108095 store.go:1342] Monitoring nodes count at <storage-prefix>//minions
I0919 09:53:39.064383  108095 reflector.go:153] Listing and watching *core.Node from storage/cacher.go:/minions
I0919 09:53:39.064510  108095 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.064614  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:53:39.064631  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:53:39.065984  108095 watch_cache.go:405] Replace watchCache (rev: 30492) 
I0919 09:53:39.066492  108095 watch_cache.go:405] Replace watchCache (rev: 30492) 
I0919 09:53:39.067322  108095 store.go:1342] Monitoring pods count at <storage-prefix>//pods
I0919 09:53:39.067502  108095 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.067688  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:53:39.067723  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:53:39.067724  108095 reflector.go:153] Listing and watching *core.Pod from storage/cacher.go:/pods
I0919 09:53:39.069353  108095 watch_cache.go:405] Replace watchCache (rev: 30493) 
I0919 09:53:39.070638  108095 store.go:1342] Monitoring serviceaccounts count at <storage-prefix>//serviceaccounts
I0919 09:53:39.070686  108095 reflector.go:153] Listing and watching *core.ServiceAccount from storage/cacher.go:/serviceaccounts
I0919 09:53:39.071047  108095 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.071239  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:53:39.071258  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:53:39.072186  108095 watch_cache.go:405] Replace watchCache (rev: 30494) 
I0919 09:53:39.072406  108095 store.go:1342] Monitoring services count at <storage-prefix>//services/specs
I0919 09:53:39.072638  108095 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.072493  108095 reflector.go:153] Listing and watching *core.Service from storage/cacher.go:/services/specs
I0919 09:53:39.073892  108095 watch_cache.go:405] Replace watchCache (rev: 30494) 
I0919 09:53:39.074634  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:53:39.074737  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:53:39.075660  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:53:39.075687  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:53:39.076487  108095 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.076703  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:53:39.076723  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:53:39.077511  108095 store.go:1342] Monitoring replicationcontrollers count at <storage-prefix>//controllers
I0919 09:53:39.077528  108095 rest.go:115] the default service ipfamily for this cluster is: IPv4
I0919 09:53:39.077663  108095 reflector.go:153] Listing and watching *core.ReplicationController from storage/cacher.go:/controllers
I0919 09:53:39.077911  108095 storage_factory.go:285] storing bindings in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.078185  108095 storage_factory.go:285] storing componentstatuses in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.079227  108095 watch_cache.go:405] Replace watchCache (rev: 30495) 
I0919 09:53:39.079312  108095 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.080287  108095 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.081091  108095 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.082289  108095 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.082951  108095 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.083145  108095 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.083362  108095 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.084276  108095 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.084976  108095 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.085251  108095 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.085997  108095 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.086680  108095 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.087256  108095 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.087476  108095 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.088181  108095 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.088412  108095 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.088514  108095 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.088598  108095 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.088996  108095 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.089137  108095 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.089255  108095 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.089981  108095 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.090284  108095 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.091368  108095 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.092105  108095 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.092481  108095 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.092958  108095 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.093586  108095 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.093881  108095 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.094447  108095 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.095694  108095 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.096277  108095 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.097314  108095 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.097564  108095 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.097673  108095 master.go:450] Skipping disabled API group "auditregistration.k8s.io".
I0919 09:53:39.097692  108095 master.go:461] Enabling API group "authentication.k8s.io".
I0919 09:53:39.097709  108095 master.go:461] Enabling API group "authorization.k8s.io".
I0919 09:53:39.097882  108095 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.098135  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:53:39.098164  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:53:39.099399  108095 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0919 09:53:39.099493  108095 reflector.go:153] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0919 09:53:39.099623  108095 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.099800  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:53:39.099833  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:53:39.100760  108095 watch_cache.go:405] Replace watchCache (rev: 30500) 
I0919 09:53:39.100800  108095 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0919 09:53:39.100889  108095 reflector.go:153] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0919 09:53:39.101027  108095 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.101206  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:53:39.101236  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:53:39.102249  108095 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0919 09:53:39.102313  108095 master.go:461] Enabling API group "autoscaling".
I0919 09:53:39.102314  108095 reflector.go:153] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0919 09:53:39.102522  108095 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.102631  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:53:39.102646  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:53:39.104254  108095 watch_cache.go:405] Replace watchCache (rev: 30500) 
I0919 09:53:39.104445  108095 store.go:1342] Monitoring jobs.batch count at <storage-prefix>//jobs
I0919 09:53:39.104536  108095 reflector.go:153] Listing and watching *batch.Job from storage/cacher.go:/jobs
I0919 09:53:39.104736  108095 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.104882  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:53:39.104913  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:53:39.105896  108095 store.go:1342] Monitoring cronjobs.batch count at <storage-prefix>//cronjobs
I0919 09:53:39.105921  108095 master.go:461] Enabling API group "batch".
I0919 09:53:39.105931  108095 reflector.go:153] Listing and watching *batch.CronJob from storage/cacher.go:/cronjobs
I0919 09:53:39.105924  108095 watch_cache.go:405] Replace watchCache (rev: 30500) 
I0919 09:53:39.106903  108095 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.107209  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:53:39.107288  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:53:39.107693  108095 watch_cache.go:405] Replace watchCache (rev: 30500) 
I0919 09:53:39.107798  108095 watch_cache.go:405] Replace watchCache (rev: 30500) 
I0919 09:53:39.109048  108095 store.go:1342] Monitoring certificatesigningrequests.certificates.k8s.io count at <storage-prefix>//certificatesigningrequests
I0919 09:53:39.109077  108095 master.go:461] Enabling API group "certificates.k8s.io".
I0919 09:53:39.109260  108095 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.109287  108095 reflector.go:153] Listing and watching *certificates.CertificateSigningRequest from storage/cacher.go:/certificatesigningrequests
I0919 09:53:39.109412  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:53:39.109440  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:53:39.110355  108095 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0919 09:53:39.110541  108095 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.110609  108095 reflector.go:153] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0919 09:53:39.110657  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:53:39.110688  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:53:39.111444  108095 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0919 09:53:39.111468  108095 master.go:461] Enabling API group "coordination.k8s.io".
I0919 09:53:39.111485  108095 master.go:450] Skipping disabled API group "discovery.k8s.io".
I0919 09:53:39.111536  108095 reflector.go:153] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0919 09:53:39.111576  108095 watch_cache.go:405] Replace watchCache (rev: 30500) 
I0919 09:53:39.111705  108095 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.111915  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:53:39.111980  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:53:39.112077  108095 watch_cache.go:405] Replace watchCache (rev: 30500) 
I0919 09:53:39.113217  108095 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0919 09:53:39.113247  108095 master.go:461] Enabling API group "extensions".
I0919 09:53:39.113285  108095 reflector.go:153] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0919 09:53:39.113523  108095 watch_cache.go:405] Replace watchCache (rev: 30500) 
I0919 09:53:39.113478  108095 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.113713  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:53:39.113738  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:53:39.115174  108095 store.go:1342] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I0919 09:53:39.115195  108095 watch_cache.go:405] Replace watchCache (rev: 30500) 
I0919 09:53:39.115395  108095 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.115520  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:53:39.115550  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:53:39.115549  108095 reflector.go:153] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I0919 09:53:39.116708  108095 watch_cache.go:405] Replace watchCache (rev: 30500) 
I0919 09:53:39.116756  108095 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0919 09:53:39.116779  108095 master.go:461] Enabling API group "networking.k8s.io".
I0919 09:53:39.116809  108095 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.116838  108095 reflector.go:153] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0919 09:53:39.116986  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:53:39.117002  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:53:39.117954  108095 store.go:1342] Monitoring runtimeclasses.node.k8s.io count at <storage-prefix>//runtimeclasses
I0919 09:53:39.117980  108095 master.go:461] Enabling API group "node.k8s.io".
I0919 09:53:39.118074  108095 reflector.go:153] Listing and watching *node.RuntimeClass from storage/cacher.go:/runtimeclasses
I0919 09:53:39.118131  108095 watch_cache.go:405] Replace watchCache (rev: 30501) 
I0919 09:53:39.118186  108095 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.118322  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:53:39.118340  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:53:39.119539  108095 watch_cache.go:405] Replace watchCache (rev: 30501) 
I0919 09:53:39.121403  108095 store.go:1342] Monitoring poddisruptionbudgets.policy count at <storage-prefix>//poddisruptionbudgets
I0919 09:53:39.121436  108095 reflector.go:153] Listing and watching *policy.PodDisruptionBudget from storage/cacher.go:/poddisruptionbudgets
I0919 09:53:39.121632  108095 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.121911  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:53:39.121951  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:53:39.122708  108095 store.go:1342] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicy
I0919 09:53:39.122847  108095 reflector.go:153] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicy
I0919 09:53:39.123161  108095 master.go:461] Enabling API group "policy".
I0919 09:53:39.123252  108095 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.123421  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:53:39.123451  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:53:39.124156  108095 watch_cache.go:405] Replace watchCache (rev: 30501) 
I0919 09:53:39.124469  108095 watch_cache.go:405] Replace watchCache (rev: 30501) 
I0919 09:53:39.125933  108095 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0919 09:53:39.126001  108095 reflector.go:153] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0919 09:53:39.126179  108095 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.126324  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:53:39.126349  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:53:39.128918  108095 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0919 09:53:39.128976  108095 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.129089  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:53:39.129102  108095 reflector.go:153] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0919 09:53:39.129116  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:53:39.129834  108095 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0919 09:53:39.130053  108095 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.130162  108095 watch_cache.go:405] Replace watchCache (rev: 30501) 
I0919 09:53:39.130174  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:53:39.130199  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:53:39.130295  108095 reflector.go:153] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0919 09:53:39.131100  108095 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0919 09:53:39.131141  108095 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.131211  108095 reflector.go:153] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0919 09:53:39.131309  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:53:39.131330  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:53:39.132260  108095 watch_cache.go:405] Replace watchCache (rev: 30501) 
I0919 09:53:39.132637  108095 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0919 09:53:39.132822  108095 reflector.go:153] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0919 09:53:39.132789  108095 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.133177  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:53:39.133193  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:53:39.136247  108095 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0919 09:53:39.136276  108095 watch_cache.go:405] Replace watchCache (rev: 30501) 
I0919 09:53:39.136286  108095 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.136313  108095 reflector.go:153] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0919 09:53:39.128928  108095 watch_cache.go:405] Replace watchCache (rev: 30501) 
I0919 09:53:39.136414  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:53:39.136430  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:53:39.136544  108095 watch_cache.go:405] Replace watchCache (rev: 30501) 
I0919 09:53:39.137682  108095 watch_cache.go:405] Replace watchCache (rev: 30501) 
I0919 09:53:39.138023  108095 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0919 09:53:39.138364  108095 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.138594  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:53:39.138423  108095 reflector.go:153] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0919 09:53:39.138729  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:53:39.140565  108095 watch_cache.go:405] Replace watchCache (rev: 30501) 
I0919 09:53:39.141563  108095 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0919 09:53:39.141722  108095 master.go:461] Enabling API group "rbac.authorization.k8s.io".
I0919 09:53:39.142345  108095 reflector.go:153] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0919 09:53:39.144554  108095 watch_cache.go:405] Replace watchCache (rev: 30501) 
I0919 09:53:39.146053  108095 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.146362  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:53:39.146477  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:53:39.147769  108095 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0919 09:53:39.148005  108095 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.148145  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:53:39.148187  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:53:39.148251  108095 reflector.go:153] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0919 09:53:39.149490  108095 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0919 09:53:39.149548  108095 reflector.go:153] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0919 09:53:39.149601  108095 master.go:461] Enabling API group "scheduling.k8s.io".
I0919 09:53:39.149906  108095 master.go:450] Skipping disabled API group "settings.k8s.io".
I0919 09:53:39.150265  108095 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.150609  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:53:39.150725  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:53:39.151022  108095 watch_cache.go:405] Replace watchCache (rev: 30501) 
I0919 09:53:39.152099  108095 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0919 09:53:39.152491  108095 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.153247  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:53:39.152186  108095 reflector.go:153] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0919 09:53:39.152209  108095 watch_cache.go:405] Replace watchCache (rev: 30501) 
I0919 09:53:39.154174  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:53:39.154626  108095 watch_cache.go:405] Replace watchCache (rev: 30501) 
I0919 09:53:39.157215  108095 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0919 09:53:39.157616  108095 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.157472  108095 reflector.go:153] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0919 09:53:39.159103  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:53:39.159136  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:53:39.159660  108095 watch_cache.go:405] Replace watchCache (rev: 30502) 
I0919 09:53:39.160508  108095 store.go:1342] Monitoring csinodes.storage.k8s.io count at <storage-prefix>//csinodes
I0919 09:53:39.160538  108095 reflector.go:153] Listing and watching *storage.CSINode from storage/cacher.go:/csinodes
I0919 09:53:39.160544  108095 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.160691  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:53:39.160707  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:53:39.161672  108095 store.go:1342] Monitoring csidrivers.storage.k8s.io count at <storage-prefix>//csidrivers
I0919 09:53:39.162011  108095 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.162052  108095 reflector.go:153] Listing and watching *storage.CSIDriver from storage/cacher.go:/csidrivers
I0919 09:53:39.162607  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:53:39.162631  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:53:39.163361  108095 watch_cache.go:405] Replace watchCache (rev: 30502) 
I0919 09:53:39.163484  108095 watch_cache.go:405] Replace watchCache (rev: 30502) 
I0919 09:53:39.164460  108095 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0919 09:53:39.164643  108095 reflector.go:153] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0919 09:53:39.164805  108095 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.164979  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:53:39.165000  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:53:39.165420  108095 watch_cache.go:405] Replace watchCache (rev: 30502) 
I0919 09:53:39.167146  108095 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0919 09:53:39.167258  108095 master.go:461] Enabling API group "storage.k8s.io".
I0919 09:53:39.167266  108095 reflector.go:153] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0919 09:53:39.167490  108095 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.167784  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:53:39.167800  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:53:39.168734  108095 watch_cache.go:405] Replace watchCache (rev: 30502) 
I0919 09:53:39.170324  108095 store.go:1342] Monitoring deployments.apps count at <storage-prefix>//deployments
I0919 09:53:39.170508  108095 reflector.go:153] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0919 09:53:39.170570  108095 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.170924  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:53:39.171010  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:53:39.172258  108095 watch_cache.go:405] Replace watchCache (rev: 30502) 
I0919 09:53:39.172325  108095 store.go:1342] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0919 09:53:39.172441  108095 reflector.go:153] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0919 09:53:39.172590  108095 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.172743  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:53:39.172794  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:53:39.174172  108095 watch_cache.go:405] Replace watchCache (rev: 30502) 
I0919 09:53:39.174859  108095 store.go:1342] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0919 09:53:39.174914  108095 reflector.go:153] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0919 09:53:39.175141  108095 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.175638  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:53:39.175671  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:53:39.175783  108095 watch_cache.go:405] Replace watchCache (rev: 30502) 
I0919 09:53:39.176598  108095 store.go:1342] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0919 09:53:39.176711  108095 reflector.go:153] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0919 09:53:39.176815  108095 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.176991  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:53:39.177016  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:53:39.177343  108095 watch_cache.go:405] Replace watchCache (rev: 30502) 
I0919 09:53:39.178279  108095 store.go:1342] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0919 09:53:39.178303  108095 master.go:461] Enabling API group "apps".
I0919 09:53:39.178327  108095 reflector.go:153] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0919 09:53:39.178341  108095 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.178685  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:53:39.178835  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:53:39.178878  108095 watch_cache.go:405] Replace watchCache (rev: 30502) 
I0919 09:53:39.180071  108095 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0919 09:53:39.180106  108095 reflector.go:153] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0919 09:53:39.180126  108095 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.180244  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:53:39.180266  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:53:39.180675  108095 watch_cache.go:405] Replace watchCache (rev: 30502) 
I0919 09:53:39.181271  108095 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0919 09:53:39.181382  108095 reflector.go:153] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0919 09:53:39.181518  108095 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.181864  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:53:39.181890  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:53:39.182564  108095 watch_cache.go:405] Replace watchCache (rev: 30502) 
I0919 09:53:39.182629  108095 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0919 09:53:39.182647  108095 reflector.go:153] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0919 09:53:39.182980  108095 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.183337  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:53:39.183457  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:53:39.183905  108095 watch_cache.go:405] Replace watchCache (rev: 30502) 
I0919 09:53:39.184347  108095 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0919 09:53:39.184434  108095 master.go:461] Enabling API group "admissionregistration.k8s.io".
I0919 09:53:39.184518  108095 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.184434  108095 reflector.go:153] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0919 09:53:39.185035  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:53:39.185129  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:53:39.185146  108095 watch_cache.go:405] Replace watchCache (rev: 30502) 
I0919 09:53:39.186053  108095 store.go:1342] Monitoring events count at <storage-prefix>//events
I0919 09:53:39.186076  108095 master.go:461] Enabling API group "events.k8s.io".
I0919 09:53:39.186098  108095 reflector.go:153] Listing and watching *core.Event from storage/cacher.go:/events
I0919 09:53:39.186380  108095 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.186696  108095 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.186876  108095 watch_cache.go:405] Replace watchCache (rev: 30502) 
I0919 09:53:39.187322  108095 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
E0919 09:53:39.187518  108095 event_broadcaster.go:244] Unable to write event: 'Post http://127.0.0.1:35645/apis/events.k8s.io/v1beta1/namespaces/permit-plugin5052de6b-1963-44f6-970a-8702b6b1a0b9/events: dial tcp 127.0.0.1:35645: connect: connection refused' (may retry after sleeping)
I0919 09:53:39.187781  108095 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.188356  108095 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.188546  108095 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.188844  108095 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.189061  108095 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.189316  108095 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.189518  108095 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.190832  108095 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.191253  108095 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.192537  108095 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.192905  108095 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.194139  108095 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.194459  108095 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.197009  108095 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.197340  108095 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.198113  108095 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.198388  108095 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0919 09:53:39.198444  108095 genericapiserver.go:404] Skipping API batch/v2alpha1 because it has no resources.
I0919 09:53:39.199266  108095 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.199431  108095 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.199630  108095 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.200528  108095 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.201543  108095 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.202427  108095 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.203001  108095 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.203781  108095 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.204521  108095 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.204984  108095 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.205550  108095 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0919 09:53:39.205620  108095 genericapiserver.go:404] Skipping API node.k8s.io/v1alpha1 because it has no resources.
I0919 09:53:39.206339  108095 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.206849  108095 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.207453  108095 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.208011  108095 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.208583  108095 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.209364  108095 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.209879  108095 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.210575  108095 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.211061  108095 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.211616  108095 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.212390  108095 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0919 09:53:39.212448  108095 genericapiserver.go:404] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
I0919 09:53:39.213059  108095 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.213652  108095 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0919 09:53:39.213726  108095 genericapiserver.go:404] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
I0919 09:53:39.214414  108095 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.215055  108095 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.215381  108095 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.215882  108095 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.216591  108095 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.217088  108095 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.217806  108095 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0919 09:53:39.217881  108095 genericapiserver.go:404] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
I0919 09:53:39.218985  108095 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.219607  108095 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.219931  108095 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.220774  108095 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.221012  108095 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.221220  108095 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.221980  108095 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.222185  108095 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.222385  108095 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.223035  108095 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.223237  108095 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.223729  108095 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0919 09:53:39.223808  108095 genericapiserver.go:404] Skipping API apps/v1beta2 because it has no resources.
W0919 09:53:39.223823  108095 genericapiserver.go:404] Skipping API apps/v1beta1 because it has no resources.
I0919 09:53:39.224650  108095 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.225239  108095 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.226118  108095 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.226629  108095 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.227461  108095 storage_factory.go:285] storing events.events.k8s.io in events.k8s.io/v1beta1, reading as events.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"1e67e1bc-1c3a-48f6-96b3-ffee8d1ea0c2", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:53:39.230714  108095 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 09:53:39.230747  108095 healthz.go:177] healthz check poststarthook/bootstrap-controller failed: not finished
I0919 09:53:39.230760  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:53:39.230772  108095 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 09:53:39.230790  108095 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 09:53:39.230799  108095 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[-]poststarthook/bootstrap-controller failed: reason withheld
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 09:53:39.230845  108095 httplog.go:90] GET /healthz: (364.452µs) 0 [Go-http-client/1.1 127.0.0.1:45448]
I0919 09:53:39.232298  108095 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.76705ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45450]
I0919 09:53:39.237568  108095 httplog.go:90] GET /api/v1/services: (3.122354ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45450]
I0919 09:53:39.241972  108095 httplog.go:90] GET /api/v1/services: (1.482984ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45450]
I0919 09:53:39.244442  108095 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 09:53:39.244481  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:53:39.244493  108095 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 09:53:39.244502  108095 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 09:53:39.244514  108095 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 09:53:39.244539  108095 httplog.go:90] GET /healthz: (207.61µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:39.245849  108095 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.760474ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45450]
I0919 09:53:39.246200  108095 httplog.go:90] GET /api/v1/services: (1.277423ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:39.248418  108095 httplog.go:90] POST /api/v1/namespaces: (2.17597ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45450]
I0919 09:53:39.249457  108095 httplog.go:90] GET /api/v1/services: (884.217µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:39.249966  108095 httplog.go:90] GET /api/v1/namespaces/kube-public: (950.631µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45450]
I0919 09:53:39.251924  108095 httplog.go:90] POST /api/v1/namespaces: (1.422095ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45450]
I0919 09:53:39.253596  108095 httplog.go:90] GET /api/v1/namespaces/kube-node-lease: (925.058µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45450]
I0919 09:53:39.256145  108095 httplog.go:90] POST /api/v1/namespaces: (1.883964ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45450]
I0919 09:53:39.331792  108095 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 09:53:39.331833  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:53:39.331846  108095 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 09:53:39.331857  108095 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 09:53:39.331871  108095 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 09:53:39.331912  108095 httplog.go:90] GET /healthz: (309.251µs) 0 [Go-http-client/1.1 127.0.0.1:45450]
I0919 09:53:39.346704  108095 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 09:53:39.346743  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:53:39.346757  108095 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 09:53:39.346767  108095 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 09:53:39.346775  108095 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 09:53:39.346886  108095 httplog.go:90] GET /healthz: (426.291µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45450]
I0919 09:53:39.431717  108095 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 09:53:39.431770  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:53:39.431782  108095 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 09:53:39.431792  108095 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 09:53:39.431819  108095 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 09:53:39.431863  108095 httplog.go:90] GET /healthz: (298.709µs) 0 [Go-http-client/1.1 127.0.0.1:45450]
I0919 09:53:39.446587  108095 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 09:53:39.446712  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:53:39.446795  108095 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 09:53:39.446879  108095 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 09:53:39.446988  108095 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 09:53:39.447202  108095 httplog.go:90] GET /healthz: (765.967µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45450]
I0919 09:53:39.531763  108095 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 09:53:39.532026  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:53:39.532143  108095 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 09:53:39.532223  108095 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 09:53:39.532289  108095 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 09:53:39.532470  108095 httplog.go:90] GET /healthz: (874.041µs) 0 [Go-http-client/1.1 127.0.0.1:45450]
I0919 09:53:39.546548  108095 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 09:53:39.546581  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:53:39.546590  108095 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 09:53:39.546596  108095 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 09:53:39.546603  108095 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 09:53:39.546630  108095 httplog.go:90] GET /healthz: (215.356µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45450]
I0919 09:53:39.631696  108095 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 09:53:39.631732  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:53:39.631744  108095 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 09:53:39.631755  108095 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 09:53:39.631769  108095 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 09:53:39.631798  108095 httplog.go:90] GET /healthz: (250.166µs) 0 [Go-http-client/1.1 127.0.0.1:45450]
I0919 09:53:39.646591  108095 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 09:53:39.646628  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:53:39.646649  108095 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 09:53:39.646659  108095 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 09:53:39.646666  108095 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 09:53:39.646728  108095 httplog.go:90] GET /healthz: (280.107µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45450]
I0919 09:53:39.731680  108095 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 09:53:39.731719  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:53:39.731732  108095 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 09:53:39.731742  108095 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 09:53:39.731750  108095 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 09:53:39.731803  108095 httplog.go:90] GET /healthz: (279.41µs) 0 [Go-http-client/1.1 127.0.0.1:45450]
I0919 09:53:39.746578  108095 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 09:53:39.746620  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:53:39.746633  108095 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 09:53:39.746642  108095 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 09:53:39.746649  108095 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 09:53:39.746698  108095 httplog.go:90] GET /healthz: (266.733µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45450]
I0919 09:53:39.831719  108095 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 09:53:39.838272  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:53:39.838303  108095 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 09:53:39.838313  108095 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 09:53:39.838323  108095 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 09:53:39.838376  108095 httplog.go:90] GET /healthz: (6.804238ms) 0 [Go-http-client/1.1 127.0.0.1:45450]
I0919 09:53:39.846543  108095 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 09:53:39.846600  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:53:39.846613  108095 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 09:53:39.846623  108095 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 09:53:39.846632  108095 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 09:53:39.846685  108095 httplog.go:90] GET /healthz: (280.108µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45450]
I0919 09:53:39.932058  108095 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 09:53:39.932098  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:53:39.932114  108095 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 09:53:39.932129  108095 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 09:53:39.932145  108095 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 09:53:39.932177  108095 httplog.go:90] GET /healthz: (289.25µs) 0 [Go-http-client/1.1 127.0.0.1:45450]
I0919 09:53:39.946566  108095 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 09:53:39.946598  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:53:39.946611  108095 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 09:53:39.946620  108095 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 09:53:39.946628  108095 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 09:53:39.946676  108095 httplog.go:90] GET /healthz: (237.476µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45450]
I0919 09:53:40.031791  108095 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 09:53:40.031828  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:53:40.031840  108095 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 09:53:40.031850  108095 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 09:53:40.031857  108095 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 09:53:40.031905  108095 httplog.go:90] GET /healthz: (284.428µs) 0 [Go-http-client/1.1 127.0.0.1:45450]
I0919 09:53:40.039612  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:53:40.039710  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:53:40.047549  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:53:40.047585  108095 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 09:53:40.047628  108095 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 09:53:40.047637  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 09:53:40.047693  108095 httplog.go:90] GET /healthz: (1.283194ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45450]
I0919 09:53:40.132564  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:53:40.132598  108095 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 09:53:40.132609  108095 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 09:53:40.132627  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 09:53:40.132664  108095 httplog.go:90] GET /healthz: (1.045469ms) 0 [Go-http-client/1.1 127.0.0.1:45450]
I0919 09:53:40.147410  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:53:40.147444  108095 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 09:53:40.147455  108095 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 09:53:40.147464  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 09:53:40.147506  108095 httplog.go:90] GET /healthz: (1.088498ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45450]
I0919 09:53:40.232103  108095 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.45825ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:40.233543  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:53:40.233573  108095 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 09:53:40.233583  108095 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 09:53:40.233591  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 09:53:40.233637  108095 httplog.go:90] GET /healthz: (1.451359ms) 0 [Go-http-client/1.1 127.0.0.1:45496]
I0919 09:53:40.233686  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles: (766.064µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45498]
I0919 09:53:40.233846  108095 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: (1.351492ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:40.235756  108095 httplog.go:90] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-node-critical: (5.128602ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45450]
I0919 09:53:40.235759  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.567696ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45498]
I0919 09:53:40.236232  108095 httplog.go:90] POST /api/v1/namespaces/kube-system/configmaps: (1.962079ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:40.237574  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (1.438215ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45498]
I0919 09:53:40.237716  108095 httplog.go:90] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (1.56145ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45450]
I0919 09:53:40.237884  108095 storage_scheduling.go:139] created PriorityClass system-node-critical with value 2000001000
I0919 09:53:40.238838  108095 httplog.go:90] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-cluster-critical: (726.005µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45450]
I0919 09:53:40.238929  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (749.441µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45498]
I0919 09:53:40.240297  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (971.477µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:40.240396  108095 httplog.go:90] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (1.156213ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45450]
I0919 09:53:40.240646  108095 storage_scheduling.go:139] created PriorityClass system-cluster-critical with value 2000000000
I0919 09:53:40.240669  108095 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
I0919 09:53:40.241665  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (1.067193ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:40.242767  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (750.323µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:40.243810  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (697.595µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:40.244847  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (688.81µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:40.245894  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-admin: (690.571µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:40.247112  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:53:40.247141  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:53:40.247176  108095 httplog.go:90] GET /healthz: (851.689µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45450]
I0919 09:53:40.249091  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.778545ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:40.249288  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/cluster-admin
I0919 09:53:40.250481  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:discovery: (923.273µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:40.252080  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.334887ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:40.252232  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:discovery
I0919 09:53:40.253197  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:basic-user: (784.011µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:40.254782  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.150738ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:40.255032  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:basic-user
I0919 09:53:40.256536  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:public-info-viewer: (1.252887ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:40.258249  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.271831ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:40.258755  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:public-info-viewer
I0919 09:53:40.259613  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (701.077µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:40.261812  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.714393ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:40.261975  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/admin
I0919 09:53:40.263276  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (798.001µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:40.265215  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.497486ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:40.265504  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/edit
I0919 09:53:40.266473  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (727.105µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:40.268221  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.118294ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:40.268717  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/view
I0919 09:53:40.269686  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (661.083µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:40.271265  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.248997ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:40.271560  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin
I0919 09:53:40.272518  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (760.434µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:40.274600  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.509229ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:40.275173  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit
I0919 09:53:40.276116  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (779.458µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:40.281009  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (4.097091ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:40.281243  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view
I0919 09:53:40.282730  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:heapster: (1.201319ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:40.284966  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.67373ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:40.285245  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:heapster
I0919 09:53:40.286468  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node: (924.894µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:40.288730  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.627917ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:40.289093  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node
I0919 09:53:40.289964  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-problem-detector: (687.694µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:40.291727  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.272571ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:40.291974  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector
I0919 09:53:40.293017  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kubelet-api-admin: (874.976µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:40.297533  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (4.068305ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:40.298023  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kubelet-api-admin
I0919 09:53:40.299076  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-bootstrapper: (862.665µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:40.300780  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.309783ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:40.300990  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-bootstrapper
I0919 09:53:40.301841  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:auth-delegator: (676.108µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:40.303661  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.347621ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:40.303856  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:auth-delegator
I0919 09:53:40.304811  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-aggregator: (720.576µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:40.306609  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.376634ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:40.306812  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator
I0919 09:53:40.307778  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-controller-manager: (704.242µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:40.310060  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.639044ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:40.310404  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager
I0919 09:53:40.311258  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-dns: (671.863µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:40.313075  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.401068ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:40.313275  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-dns
I0919 09:53:40.314415  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:persistent-volume-provisioner: (924.813µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:40.316021  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.29062ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:40.316327  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner
I0919 09:53:40.318511  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-attacher: (1.859954ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:40.320746  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.444108ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:40.321029  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:csi-external-attacher
I0919 09:53:40.322119  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:nodeclient: (806.583µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:40.323970  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.431816ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:40.324148  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:nodeclient
I0919 09:53:40.325293  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient: (927.606µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:40.327238  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.374562ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:40.327682  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
I0919 09:53:40.328757  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:volume-scheduler: (759.255µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:40.331372  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.160977ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:40.331740  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:volume-scheduler
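The run of paired requests above (a `GET` on each default clusterrole returning 404, followed by a `POST` returning 201 and a "created clusterrole" line) is the apiserver's RBAC bootstrap reconciler ensuring each built-in role exists. The control flow reduces to an idempotent get-or-create; a minimal sketch of that pattern (hypothetical helper, not code from the Kubernetes tree, with `KeyError` standing in for the 404):

```python
def ensure_exists(get, create, name):
    """Idempotent get-or-create, mirroring the GET-404 / POST-201 pairs in the log.

    `get` raises KeyError when the object is absent (the 404 case);
    `create` registers it (the 201 case). Returns (object, created_now).
    """
    try:
        return get(name), False   # already present: GET succeeded
    except KeyError:
        return create(name), True  # absent: create it, like the POST above


# Usage against a dict-backed stand-in for the API store:
store = {}

def _get(name):
    if name not in store:
        raise KeyError(name)
    return store[name]

def _create(name):
    store[name] = {"name": name}
    return store[name]

obj, created = ensure_exists(_get, _create, "system:volume-scheduler")
```

On a second call with the same name, `created` comes back `False`, which is why re-running bootstrap is safe.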
I0919 09:53:40.332502  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:53:40.332543  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:53:40.332575  108095 httplog.go:90] GET /healthz: (1.092875ms) 0 [Go-http-client/1.1 127.0.0.1:45450]
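The block above is the apiserver's verbose `/healthz` output: each check is prefixed `[+]` (passing) or `[-]` (failing), and the endpoint keeps returning a failure until `poststarthook/rbac/bootstrap-roles` completes — which is why this block repeats throughout the log while the roles are still being created. A small sketch of parsing that line format (a hypothetical helper, not part of the Kubernetes codebase):

```python
def failing_checks(healthz_body: str) -> list[str]:
    """Return the names of checks marked '[-]' in a verbose /healthz body.

    Lines look like:
      [+]ping ok
      [-]poststarthook/rbac/bootstrap-roles failed: reason withheld
    """
    failures = []
    for line in healthz_body.splitlines():
        line = line.strip()
        if line.startswith("[-]"):
            # Strip the "[-]" prefix and the trailing " failed: ..." suffix.
            name = line[len("[-]"):].split(" failed", 1)[0]
            failures.append(name)
    return failures
```

Applied to the block above, this yields the single failing check `poststarthook/rbac/bootstrap-roles`, matching the `healthz check failed` summary line.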
I0919 09:53:40.333451  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-proxier: (964.774µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:40.335200  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.258928ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:40.335742  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-proxier
I0919 09:53:40.336630  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-scheduler: (643.291µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:40.339243  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.933389ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:40.339570  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler
I0919 09:53:40.340553  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-provisioner: (705.765µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:40.343609  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.603818ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:40.343982  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:csi-external-provisioner
I0919 09:53:40.345131  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:attachdetach-controller: (903.355µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:40.347044  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:53:40.347072  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:53:40.347103  108095 httplog.go:90] GET /healthz: (778.874µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45450]
I0919 09:53:40.347735  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.235342ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:40.347947  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0919 09:53:40.349108  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:clusterrole-aggregation-controller: (996.763µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:40.350794  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.230868ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:40.351105  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0919 09:53:40.352064  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:cronjob-controller: (752.919µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:40.354047  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.282789ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:40.354431  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0919 09:53:40.355641  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:daemon-set-controller: (798.189µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:40.358311  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.184941ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:40.358624  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0919 09:53:40.360074  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:deployment-controller: (1.139546ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:40.361978  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.505728ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:40.362160  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller
I0919 09:53:40.363040  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:disruption-controller: (687.499µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:40.364644  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.202662ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:40.364835  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller
I0919 09:53:40.365708  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:endpoint-controller: (675.837µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:40.367547  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.476954ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:40.367799  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0919 09:53:40.368768  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:expand-controller: (769.229µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:40.370461  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.260938ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:40.370703  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:expand-controller
I0919 09:53:40.371646  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:generic-garbage-collector: (755.426µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:40.376153  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (4.155834ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:40.376318  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0919 09:53:40.377817  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:horizontal-pod-autoscaler: (1.380429ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:40.381541  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.316352ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:40.381842  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0919 09:53:40.382953  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:job-controller: (859.038µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:40.384805  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.460291ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:40.385048  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller
I0919 09:53:40.386067  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:namespace-controller: (832.295µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:40.388047  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.533579ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:40.388315  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:namespace-controller
I0919 09:53:40.389518  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:node-controller: (946.055µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:40.391505  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.390388ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:40.391742  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:node-controller
I0919 09:53:40.393055  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:persistent-volume-binder: (1.109664ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:40.398600  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (5.135589ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:40.399012  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0919 09:53:40.400292  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pod-garbage-collector: (957.899µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:40.402589  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.838596ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:40.402856  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0919 09:53:40.404271  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replicaset-controller: (1.065441ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:40.406742  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.876439ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:40.407281  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0919 09:53:40.408567  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replication-controller: (961.707µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:40.410654  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.487897ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:40.411125  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller
I0919 09:53:40.412250  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:resourcequota-controller: (794.809µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:40.414105  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.408202ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:40.414502  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0919 09:53:40.417584  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:route-controller: (1.356729ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:40.420220  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.750167ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:40.427456  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:route-controller
I0919 09:53:40.428734  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-account-controller: (1.034086ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:40.430987  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.697355ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:40.431203  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-account-controller
I0919 09:53:40.432341  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:53:40.432520  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:53:40.432773  108095 httplog.go:90] GET /healthz: (1.287207ms) 0 [Go-http-client/1.1 127.0.0.1:45450]
I0919 09:53:40.432408  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-controller: (982.744µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:40.435330  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.58759ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45450]
I0919 09:53:40.436216  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-controller
I0919 09:53:40.437290  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:statefulset-controller: (809.119µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45450]
I0919 09:53:40.439307  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.300913ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45450]
I0919 09:53:40.439649  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0919 09:53:40.441259  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:ttl-controller: (614.925µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45450]
I0919 09:53:40.443062  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.308714ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45450]
I0919 09:53:40.443413  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-controller
I0919 09:53:40.444197  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:certificate-controller: (627.083µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45450]
I0919 09:53:40.447078  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:53:40.447215  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:53:40.447401  108095 httplog.go:90] GET /healthz: (1.019317ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45450]
I0919 09:53:40.455112  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.09819ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45450]
I0919 09:53:40.462948  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller
I0919 09:53:40.474164  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pvc-protection-controller: (1.221208ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45450]
I0919 09:53:40.495038  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.937367ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45450]
I0919 09:53:40.496151  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0919 09:53:40.514816  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pv-protection-controller: (1.555453ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45450]
I0919 09:53:40.532771  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:53:40.532811  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:53:40.532860  108095 httplog.go:90] GET /healthz: (1.290407ms) 0 [Go-http-client/1.1 127.0.0.1:45450]
I0919 09:53:40.535252  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.020069ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45450]
I0919 09:53:40.535526  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0919 09:53:40.547574  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:53:40.547614  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:53:40.547660  108095 httplog.go:90] GET /healthz: (1.199214ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45450]
I0919 09:53:40.554344  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin: (1.395541ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45450]
I0919 09:53:40.575720  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.662612ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45450]
I0919 09:53:40.576002  108095 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin
I0919 09:53:40.594393  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (1.370475ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45450]
I0919 09:53:40.615307  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.222118ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45450]
I0919 09:53:40.615798  108095 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery
I0919 09:53:40.632671  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:53:40.632700  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:53:40.632749  108095 httplog.go:90] GET /healthz: (1.162441ms) 0 [Go-http-client/1.1 127.0.0.1:45450]
I0919 09:53:40.634157  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:basic-user: (1.079799ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45450]
I0919 09:53:40.647573  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:53:40.647603  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:53:40.647643  108095 httplog.go:90] GET /healthz: (1.175457ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45450]
I0919 09:53:40.655451  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.295231ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45450]
I0919 09:53:40.655698  108095 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user
I0919 09:53:40.674688  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:public-info-viewer: (1.521934ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45450]
I0919 09:53:40.695383  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.223704ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45450]
I0919 09:53:40.695669  108095 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:public-info-viewer
I0919 09:53:40.714270  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node-proxier: (1.24952ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45450]
I0919 09:53:40.733434  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:53:40.733464  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:53:40.733511  108095 httplog.go:90] GET /healthz: (2.013546ms) 0 [Go-http-client/1.1 127.0.0.1:45450]
I0919 09:53:40.735484  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.221112ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:40.735764  108095 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier
I0919 09:53:40.747558  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:53:40.747597  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:53:40.747638  108095 httplog.go:90] GET /healthz: (1.219691ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:40.754169  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-controller-manager: (1.245747ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:40.775100  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.106289ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:40.775347  108095 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager
I0919 09:53:40.794442  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-dns: (1.428341ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:40.815402  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.338446ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:40.817763  108095 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns
I0919 09:53:40.833251  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:53:40.833284  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:53:40.833318  108095 httplog.go:90] GET /healthz: (1.710761ms) 0 [Go-http-client/1.1 127.0.0.1:45448]
I0919 09:53:40.834198  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-scheduler: (1.044471ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45450]
I0919 09:53:40.847199  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:53:40.847227  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:53:40.847286  108095 httplog.go:90] GET /healthz: (988.176µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45450]
I0919 09:53:40.854757  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.829819ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45450]
I0919 09:53:40.855140  108095 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler
I0919 09:53:40.874773  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:volume-scheduler: (1.64752ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45450]
I0919 09:53:40.895675  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.60684ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45450]
I0919 09:53:40.896342  108095 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:volume-scheduler
I0919 09:53:40.915635  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node: (2.28951ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45450]
I0919 09:53:40.932525  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:53:40.932557  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:53:40.932601  108095 httplog.go:90] GET /healthz: (1.11268ms) 0 [Go-http-client/1.1 127.0.0.1:45450]
I0919 09:53:40.935614  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.04909ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45450]
I0919 09:53:40.935816  108095 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node
I0919 09:53:40.954023  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:53:40.954056  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:53:40.954092  108095 httplog.go:90] GET /healthz: (7.68762ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45450]
I0919 09:53:40.959575  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:attachdetach-controller: (6.081937ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:40.975285  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.190909ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:40.975551  108095 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0919 09:53:40.994333  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:clusterrole-aggregation-controller: (1.300417ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:41.015481  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.371473ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:41.015716  108095 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0919 09:53:41.032603  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:53:41.032633  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:53:41.032684  108095 httplog.go:90] GET /healthz: (1.079033ms) 0 [Go-http-client/1.1 127.0.0.1:45448]
I0919 09:53:41.037064  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:cronjob-controller: (1.780275ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:41.047385  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:53:41.047427  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:53:41.047473  108095 httplog.go:90] GET /healthz: (1.102288ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:41.055013  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.94146ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:41.055235  108095 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0919 09:53:41.074473  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:daemon-set-controller: (1.429714ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:41.095718  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.658517ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:41.096023  108095 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0919 09:53:41.114369  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:deployment-controller: (1.356703ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:41.132576  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:53:41.132623  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:53:41.132679  108095 httplog.go:90] GET /healthz: (1.134458ms) 0 [Go-http-client/1.1 127.0.0.1:45448]
I0919 09:53:41.134716  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.67094ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:41.135048  108095 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I0919 09:53:41.147373  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:53:41.147406  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:53:41.147470  108095 httplog.go:90] GET /healthz: (1.036561ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:41.154188  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:disruption-controller: (1.26722ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:41.175243  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.232249ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:41.175520  108095 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I0919 09:53:41.194644  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:endpoint-controller: (1.618781ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:41.215497  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.474639ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:41.216021  108095 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0919 09:53:41.236675  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:53:41.236713  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:53:41.236782  108095 httplog.go:90] GET /healthz: (1.625107ms) 0 [Go-http-client/1.1 127.0.0.1:45448]
I0919 09:53:41.236811  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:expand-controller: (1.656992ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45450]
I0919 09:53:41.248099  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:53:41.248130  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:53:41.248167  108095 httplog.go:90] GET /healthz: (1.548188ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:41.254494  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.603176ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:41.255018  108095 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
I0919 09:53:41.274755  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:generic-garbage-collector: (1.731244ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:41.295468  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.398703ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:41.295761  108095 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0919 09:53:41.314598  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:horizontal-pod-autoscaler: (1.554486ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:41.333088  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:53:41.333130  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:53:41.333186  108095 httplog.go:90] GET /healthz: (1.476664ms) 0 [Go-http-client/1.1 127.0.0.1:45448]
I0919 09:53:41.335562  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.178244ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45450]
I0919 09:53:41.336055  108095 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0919 09:53:41.348857  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:53:41.348905  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:53:41.348969  108095 httplog.go:90] GET /healthz: (2.548524ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:41.354325  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:job-controller: (1.274865ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:41.375038  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.0918ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:41.375313  108095 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I0919 09:53:41.394438  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:namespace-controller: (1.383757ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:41.415282  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.285054ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:41.415547  108095 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I0919 09:53:41.433510  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:53:41.433537  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:53:41.433578  108095 httplog.go:90] GET /healthz: (2.058228ms) 0 [Go-http-client/1.1 127.0.0.1:45448]
I0919 09:53:41.434509  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:node-controller: (1.653412ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45450]
I0919 09:53:41.447423  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:53:41.447452  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:53:41.447489  108095 httplog.go:90] GET /healthz: (1.03279ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45450]
I0919 09:53:41.454996  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.069279ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45450]
I0919 09:53:41.455256  108095 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I0919 09:53:41.474484  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:persistent-volume-binder: (1.436829ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45450]
I0919 09:53:41.495347  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.366508ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45450]
I0919 09:53:41.495597  108095 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0919 09:53:41.514445  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pod-garbage-collector: (1.403532ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45450]
I0919 09:53:41.532650  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:53:41.532680  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:53:41.532748  108095 httplog.go:90] GET /healthz: (1.17212ms) 0 [Go-http-client/1.1 127.0.0.1:45450]
I0919 09:53:41.535085  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.796003ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45450]
I0919 09:53:41.535305  108095 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0919 09:53:41.547639  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:53:41.547668  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:53:41.547713  108095 httplog.go:90] GET /healthz: (1.192305ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45450]
I0919 09:53:41.554207  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replicaset-controller: (1.334101ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45450]
I0919 09:53:41.575160  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.111045ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45450]
I0919 09:53:41.575406  108095 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0919 09:53:41.594490  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replication-controller: (1.443989ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45450]
I0919 09:53:41.615280  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.256532ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45450]
I0919 09:53:41.615550  108095 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I0919 09:53:41.634182  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:resourcequota-controller: (1.201201ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45450]
I0919 09:53:41.634182  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:53:41.634240  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:53:41.634285  108095 httplog.go:90] GET /healthz: (1.197822ms) 0 [Go-http-client/1.1 127.0.0.1:45448]
I0919 09:53:41.648837  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:53:41.648890  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:53:41.648951  108095 httplog.go:90] GET /healthz: (2.527503ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45450]
I0919 09:53:41.655130  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.200555ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45450]
I0919 09:53:41.655361  108095 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0919 09:53:41.674476  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:route-controller: (1.404974ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45450]
I0919 09:53:41.696072  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.041458ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45450]
I0919 09:53:41.696363  108095 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I0919 09:53:41.714433  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-account-controller: (1.392178ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45450]
I0919 09:53:41.736502  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:53:41.736540  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:53:41.736595  108095 httplog.go:90] GET /healthz: (1.934006ms) 0 [Go-http-client/1.1 127.0.0.1:45450]
I0919 09:53:41.737442  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.793953ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:41.738041  108095 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I0919 09:53:41.747448  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:53:41.747479  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:53:41.747527  108095 httplog.go:90] GET /healthz: (1.020758ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:41.754036  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-controller: (1.185488ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:41.775369  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.349271ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:41.775780  108095 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I0919 09:53:41.794379  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:statefulset-controller: (1.348186ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:41.815294  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.284442ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:41.815518  108095 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0919 09:53:41.835368  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:53:41.835396  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:53:41.835430  108095 httplog.go:90] GET /healthz: (1.136887ms) 0 [Go-http-client/1.1 127.0.0.1:45448]
I0919 09:53:41.835655  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:ttl-controller: (1.841643ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45450]
I0919 09:53:41.847140  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:53:41.847188  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:53:41.847225  108095 httplog.go:90] GET /healthz: (868.362µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45450]
I0919 09:53:41.857549  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (4.537697ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45450]
I0919 09:53:41.857850  108095 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I0919 09:53:41.874690  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:certificate-controller: (1.652991ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45450]
I0919 09:53:41.895402  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.369793ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45450]
I0919 09:53:41.895857  108095 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I0919 09:53:41.914537  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pvc-protection-controller: (1.411445ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45450]
I0919 09:53:41.933448  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:53:41.933489  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:53:41.933526  108095 httplog.go:90] GET /healthz: (1.70256ms) 0 [Go-http-client/1.1 127.0.0.1:45450]
I0919 09:53:41.935289  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.146197ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:41.935595  108095 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0919 09:53:41.947588  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:53:41.947629  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:53:41.947671  108095 httplog.go:90] GET /healthz: (1.257326ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:41.954073  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pv-protection-controller: (1.127913ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:41.975123  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.13525ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:41.975663  108095 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0919 09:53:41.994477  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/extension-apiserver-authentication-reader: (1.335855ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:41.997871  108095 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.641476ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:42.015521  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.465122ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:42.015769  108095 storage_rbac.go:278] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
I0919 09:53:42.032604  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:53:42.032646  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:53:42.032694  108095 httplog.go:90] GET /healthz: (1.193125ms) 0 [Go-http-client/1.1 127.0.0.1:45448]
I0919 09:53:42.034088  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:bootstrap-signer: (1.220931ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45450]
I0919 09:53:42.035682  108095 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.136057ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45450]
I0919 09:53:42.047527  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:53:42.047563  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:53:42.047619  108095 httplog.go:90] GET /healthz: (1.215353ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45450]
I0919 09:53:42.055111  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.174708ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45450]
I0919 09:53:42.055611  108095 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0919 09:53:42.074670  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:cloud-provider: (1.678152ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45450]
I0919 09:53:42.076876  108095 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.747869ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45450]
I0919 09:53:42.095441  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.423622ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45450]
I0919 09:53:42.095725  108095 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0919 09:53:42.117782  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:token-cleaner: (1.414735ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45450]
I0919 09:53:42.122411  108095 httplog.go:90] GET /api/v1/namespaces/kube-system: (4.094718ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45450]
I0919 09:53:42.133072  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:53:42.133098  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:53:42.133144  108095 httplog.go:90] GET /healthz: (1.659857ms) 0 [Go-http-client/1.1 127.0.0.1:45450]
I0919 09:53:42.136138  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (3.177264ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:42.136643  108095 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0919 09:53:42.147584  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:53:42.147613  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:53:42.147650  108095 httplog.go:90] GET /healthz: (1.255275ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:42.154231  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-controller-manager: (1.345476ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:42.156701  108095 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.797323ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:42.176187  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (3.144827ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:42.178285  108095 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0919 09:53:42.194687  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-scheduler: (1.660911ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:42.196805  108095 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.345537ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:42.215888  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.866433ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:42.216470  108095 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0919 09:53:42.232841  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:53:42.232873  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:53:42.232916  108095 httplog.go:90] GET /healthz: (1.337033ms) 0 [Go-http-client/1.1 127.0.0.1:45448]
I0919 09:53:42.234775  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles/system:controller:bootstrap-signer: (1.073494ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:42.236815  108095 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.477263ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:42.250234  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:53:42.250277  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:53:42.250313  108095 httplog.go:90] GET /healthz: (1.210942ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:42.255228  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles: (2.26125ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:42.255533  108095 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0919 09:53:42.274799  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::extension-apiserver-authentication-reader: (1.722519ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:42.277457  108095 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.125952ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:42.295714  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.697811ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:42.296014  108095 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::extension-apiserver-authentication-reader in kube-system
I0919 09:53:42.314526  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-controller-manager: (1.4731ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:42.316412  108095 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.311956ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:42.334146  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:53:42.334188  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:53:42.334235  108095 httplog.go:90] GET /healthz: (2.606433ms) 0 [Go-http-client/1.1 127.0.0.1:45448]
I0919 09:53:42.337154  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (3.956526ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45450]
I0919 09:53:42.337549  108095 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0919 09:53:42.347571  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:53:42.347606  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:53:42.347654  108095 httplog.go:90] GET /healthz: (1.172698ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45450]
I0919 09:53:42.354367  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-scheduler: (1.009462ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45450]
I0919 09:53:42.356966  108095 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.152462ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45450]
I0919 09:53:42.375904  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.669815ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45450]
I0919 09:53:42.376204  108095 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0919 09:53:42.395115  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:bootstrap-signer: (2.026736ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45450]
I0919 09:53:42.397928  108095 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.316367ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45450]
I0919 09:53:42.415737  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.685011ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45450]
I0919 09:53:42.416011  108095 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0919 09:53:42.432605  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:53:42.432654  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:53:42.432700  108095 httplog.go:90] GET /healthz: (1.190047ms) 0 [Go-http-client/1.1 127.0.0.1:45450]
I0919 09:53:42.434246  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:cloud-provider: (1.127254ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45450]
I0919 09:53:42.436361  108095 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.463147ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45450]
I0919 09:53:42.447548  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:53:42.447584  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:53:42.447626  108095 httplog.go:90] GET /healthz: (1.204677ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45450]
I0919 09:53:42.455612  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.564498ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45450]
I0919 09:53:42.455920  108095 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0919 09:53:42.474236  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:token-cleaner: (1.284573ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45450]
I0919 09:53:42.476161  108095 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.339806ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45450]
I0919 09:53:42.495420  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.320132ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45450]
I0919 09:53:42.495986  108095 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0919 09:53:42.514776  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings/system:controller:bootstrap-signer: (1.508387ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45450]
I0919 09:53:42.517176  108095 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.482189ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45450]
I0919 09:53:42.532719  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:53:42.532753  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:53:42.532792  108095 httplog.go:90] GET /healthz: (1.263159ms) 0 [Go-http-client/1.1 127.0.0.1:45450]
I0919 09:53:42.535186  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings: (1.974641ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45450]
I0919 09:53:42.535482  108095 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0919 09:53:42.547428  108095 httplog.go:90] GET /healthz: (1.073431ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45450]
I0919 09:53:42.549252  108095 httplog.go:90] GET /api/v1/namespaces/default: (1.500009ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45450]
I0919 09:53:42.551525  108095 httplog.go:90] POST /api/v1/namespaces: (1.88775ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45450]
I0919 09:53:42.553085  108095 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.112634ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45450]
I0919 09:53:42.557500  108095 httplog.go:90] POST /api/v1/namespaces/default/services: (4.067018ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45450]
I0919 09:53:42.559489  108095 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.177213ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45450]
I0919 09:53:42.564627  108095 httplog.go:90] POST /api/v1/namespaces/default/endpoints: (4.122315ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45450]
I0919 09:53:42.633428  108095 httplog.go:90] GET /healthz: (1.760499ms) 200 [Go-http-client/1.1 127.0.0.1:45450]
W0919 09:53:42.634490  108095 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 09:53:42.634544  108095 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 09:53:42.634562  108095 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 09:53:42.634589  108095 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 09:53:42.634598  108095 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 09:53:42.634610  108095 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 09:53:42.634618  108095 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 09:53:42.634629  108095 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 09:53:42.634639  108095 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 09:53:42.634651  108095 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 09:53:42.634746  108095 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0919 09:53:42.634766  108095 factory.go:294] Creating scheduler from algorithm provider 'DefaultProvider'
I0919 09:53:42.634777  108095 factory.go:382] Creating scheduler with fit predicates 'map[CheckNodeUnschedulable:{} CheckVolumeBinding:{} GeneralPredicates:{} MatchInterPodAffinity:{} MaxAzureDiskVolumeCount:{} MaxCSIVolumeCountPred:{} MaxEBSVolumeCount:{} MaxGCEPDVolumeCount:{} NoDiskConflict:{} NoVolumeZoneConflict:{} PodToleratesNodeTaints:{}]' and priority functions 'map[BalancedResourceAllocation:{} ImageLocalityPriority:{} InterPodAffinityPriority:{} LeastRequestedPriority:{} NodeAffinityPriority:{} NodePreferAvoidPodsPriority:{} SelectorSpreadPriority:{} TaintTolerationPriority:{}]'
I0919 09:53:42.634981  108095 shared_informer.go:197] Waiting for caches to sync for scheduler
I0919 09:53:42.635220  108095 reflector.go:118] Starting reflector *v1.Pod (12h0m0s) from k8s.io/kubernetes/test/integration/scheduler/util.go:231
I0919 09:53:42.635235  108095 reflector.go:153] Listing and watching *v1.Pod from k8s.io/kubernetes/test/integration/scheduler/util.go:231
I0919 09:53:42.636308  108095 httplog.go:90] GET /api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0: (744.954µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45450]
I0919 09:53:42.637311  108095 get.go:251] Starting watch for /api/v1/pods, rv=30493 labels= fields=status.phase!=Failed,status.phase!=Succeeded timeout=8m51s
I0919 09:53:42.735165  108095 shared_informer.go:227] caches populated
I0919 09:53:42.735206  108095 shared_informer.go:204] Caches are synced for scheduler 
I0919 09:53:42.735542  108095 reflector.go:118] Starting reflector *v1.Node (1s) from k8s.io/client-go/informers/factory.go:134
I0919 09:53:42.735576  108095 reflector.go:153] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:134
I0919 09:53:42.735645  108095 reflector.go:118] Starting reflector *v1beta1.PodDisruptionBudget (1s) from k8s.io/client-go/informers/factory.go:134
I0919 09:53:42.735671  108095 reflector.go:153] Listing and watching *v1beta1.PodDisruptionBudget from k8s.io/client-go/informers/factory.go:134
I0919 09:53:42.735844  108095 reflector.go:118] Starting reflector *v1.ReplicationController (1s) from k8s.io/client-go/informers/factory.go:134
I0919 09:53:42.735865  108095 reflector.go:153] Listing and watching *v1.ReplicationController from k8s.io/client-go/informers/factory.go:134
I0919 09:53:42.736089  108095 reflector.go:118] Starting reflector *v1.StatefulSet (1s) from k8s.io/client-go/informers/factory.go:134
I0919 09:53:42.736119  108095 reflector.go:153] Listing and watching *v1.StatefulSet from k8s.io/client-go/informers/factory.go:134
I0919 09:53:42.736141  108095 reflector.go:118] Starting reflector *v1beta1.CSINode (1s) from k8s.io/client-go/informers/factory.go:134
I0919 09:53:42.736162  108095 reflector.go:153] Listing and watching *v1beta1.CSINode from k8s.io/client-go/informers/factory.go:134
I0919 09:53:42.736417  108095 reflector.go:118] Starting reflector *v1.StorageClass (1s) from k8s.io/client-go/informers/factory.go:134
I0919 09:53:42.736438  108095 reflector.go:153] Listing and watching *v1.StorageClass from k8s.io/client-go/informers/factory.go:134
I0919 09:53:42.736626  108095 reflector.go:118] Starting reflector *v1.PersistentVolume (1s) from k8s.io/client-go/informers/factory.go:134
I0919 09:53:42.736642  108095 reflector.go:153] Listing and watching *v1.PersistentVolume from k8s.io/client-go/informers/factory.go:134
I0919 09:53:42.736761  108095 reflector.go:118] Starting reflector *v1.PersistentVolumeClaim (1s) from k8s.io/client-go/informers/factory.go:134
I0919 09:53:42.736778  108095 reflector.go:153] Listing and watching *v1.PersistentVolumeClaim from k8s.io/client-go/informers/factory.go:134
I0919 09:53:42.737064  108095 reflector.go:118] Starting reflector *v1.Service (1s) from k8s.io/client-go/informers/factory.go:134
I0919 09:53:42.737081  108095 reflector.go:153] Listing and watching *v1.Service from k8s.io/client-go/informers/factory.go:134
I0919 09:53:42.737644  108095 reflector.go:118] Starting reflector *v1.ReplicaSet (1s) from k8s.io/client-go/informers/factory.go:134
I0919 09:53:42.737692  108095 reflector.go:153] Listing and watching *v1.ReplicaSet from k8s.io/client-go/informers/factory.go:134
I0919 09:53:42.738887  108095 httplog.go:90] GET /apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: (565.522µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:53:42.739237  108095 httplog.go:90] GET /apis/apps/v1/replicasets?limit=500&resourceVersion=0: (521.481µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45538]
I0919 09:53:42.739517  108095 httplog.go:90] GET /api/v1/replicationcontrollers?limit=500&resourceVersion=0: (991.965µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45524]
I0919 09:53:42.739810  108095 get.go:251] Starting watch for /apis/policy/v1beta1/poddisruptionbudgets, rv=30501 labels= fields= timeout=6m58s
I0919 09:53:42.740132  108095 httplog.go:90] GET /apis/apps/v1/statefulsets?limit=500&resourceVersion=0: (482.892µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45526]
I0919 09:53:42.740564  108095 httplog.go:90] GET /api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: (484.637µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45534]
I0919 09:53:42.740642  108095 get.go:251] Starting watch for /apis/apps/v1/replicasets, rv=30502 labels= fields= timeout=5m27s
I0919 09:53:42.740907  108095 httplog.go:90] GET /api/v1/services?limit=500&resourceVersion=0: (663.331µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45536]
I0919 09:53:42.741090  108095 httplog.go:90] GET /apis/storage.k8s.io/v1beta1/csinodes?limit=500&resourceVersion=0: (426.567µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45528]
I0919 09:53:42.741312  108095 get.go:251] Starting watch for /api/v1/replicationcontrollers, rv=30495 labels= fields= timeout=9m33s
I0919 09:53:42.741396  108095 get.go:251] Starting watch for /api/v1/persistentvolumeclaims, rv=30491 labels= fields= timeout=9m59s
I0919 09:53:42.742230  108095 get.go:251] Starting watch for /apis/apps/v1/statefulsets, rv=30502 labels= fields= timeout=6m30s
I0919 09:53:42.742496  108095 get.go:251] Starting watch for /apis/storage.k8s.io/v1beta1/csinodes, rv=30502 labels= fields= timeout=7m54s
I0919 09:53:42.742612  108095 httplog.go:90] GET /apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: (981.525µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45530]
I0919 09:53:42.743375  108095 httplog.go:90] GET /api/v1/persistentvolumes?limit=500&resourceVersion=0: (1.764138ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45532]
I0919 09:53:42.743481  108095 get.go:251] Starting watch for /api/v1/services, rv=30758 labels= fields= timeout=6m56s
I0919 09:53:42.745416  108095 get.go:251] Starting watch for /api/v1/persistentvolumes, rv=30491 labels= fields= timeout=5m27s
I0919 09:53:42.745414  108095 get.go:251] Starting watch for /apis/storage.k8s.io/v1/storageclasses, rv=30502 labels= fields= timeout=9m35s
I0919 09:53:42.745659  108095 httplog.go:90] GET /api/v1/nodes?limit=500&resourceVersion=0: (453.741µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45540]
I0919 09:53:42.747804  108095 get.go:251] Starting watch for /api/v1/nodes, rv=30492 labels= fields= timeout=7m35s
I0919 09:53:42.835474  108095 shared_informer.go:227] caches populated
I0919 09:53:42.835513  108095 shared_informer.go:227] caches populated
I0919 09:53:42.835520  108095 shared_informer.go:227] caches populated
I0919 09:53:42.835527  108095 shared_informer.go:227] caches populated
I0919 09:53:42.835534  108095 shared_informer.go:227] caches populated
I0919 09:53:42.835540  108095 shared_informer.go:227] caches populated
I0919 09:53:42.835546  108095 shared_informer.go:227] caches populated
I0919 09:53:42.835551  108095 shared_informer.go:227] caches populated
I0919 09:53:42.835556  108095 shared_informer.go:227] caches populated
I0919 09:53:42.835566  108095 shared_informer.go:227] caches populated
I0919 09:53:42.835575  108095 shared_informer.go:227] caches populated
I0919 09:53:42.838985  108095 httplog.go:90] POST /api/v1/nodes: (2.362622ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45542]
I0919 09:53:42.839373  108095 node_tree.go:93] Added node "testnode" in group "" to NodeTree
I0919 09:53:42.848841  108095 httplog.go:90] PUT /api/v1/nodes/testnode/status: (9.402824ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45542]
I0919 09:53:42.852195  108095 scheduling_queue.go:830] About to try and schedule pod node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pidpressure-fake-name
I0919 09:53:42.852215  108095 scheduler.go:530] Attempting to schedule pod: node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pidpressure-fake-name
I0919 09:53:42.852358  108095 scheduler_binder.go:257] AssumePodVolumes for pod "node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pidpressure-fake-name", node "testnode"
I0919 09:53:42.852372  108095 httplog.go:90] POST /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods: (2.702172ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45542]
I0919 09:53:42.852375  108095 scheduler_binder.go:267] AssumePodVolumes for pod "node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pidpressure-fake-name", node "testnode": all PVCs bound and nothing to do
I0919 09:53:42.852499  108095 factory.go:606] Attempting to bind pidpressure-fake-name to testnode
I0919 09:53:42.855137  108095 httplog.go:90] POST /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name/binding: (2.351499ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45542]
I0919 09:53:42.855402  108095 scheduler.go:662] pod node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pidpressure-fake-name is bound successfully on node "testnode", 1 nodes evaluated, 1 nodes were found feasible. Bound node resource: "Capacity: CPU<0>|Memory<0>|Pods<32>|StorageEphemeral<0>; Allocatable: CPU<0>|Memory<0>|Pods<32>|StorageEphemeral<0>.".
I0919 09:53:42.857712  108095 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/events: (1.720142ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45542]
E0919 09:53:42.859193  108095 factory.go:590] Error getting pod permit-plugin5052de6b-1963-44f6-970a-8702b6b1a0b9/test-pod for retry: Get http://127.0.0.1:35645/api/v1/namespaces/permit-plugin5052de6b-1963-44f6-970a-8702b6b1a0b9/pods/test-pod: dial tcp 127.0.0.1:35645: connect: connection refused; retrying...
I0919 09:53:42.957258  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.771124ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45542]
I0919 09:53:43.054875  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.67132ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45542]
I0919 09:53:43.155326  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (2.149234ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45542]
I0919 09:53:43.255099  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.808507ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45542]
I0919 09:53:43.355225  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (2.016994ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45542]
I0919 09:53:43.455003  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.867443ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45542]
I0919 09:53:43.555794  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (2.324289ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45542]
I0919 09:53:43.655132  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.925789ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45542]
I0919 09:53:43.741149  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:53:43.741528  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:53:43.741575  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:53:43.743387  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:53:43.745256  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:53:43.747670  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:53:43.755718  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (2.519987ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45542]
I0919 09:53:43.855439  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (2.278071ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45542]
I0919 09:53:43.955314  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (2.141975ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45542]
I0919 09:53:44.055330  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (2.109104ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45542]
I0919 09:53:44.155543  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (2.062686ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45542]
I0919 09:53:44.254850  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.709224ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45542]
I0919 09:53:44.354799  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.64313ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45542]
I0919 09:53:44.455371  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (2.183091ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45542]
I0919 09:53:44.555534  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (2.356242ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45542]
I0919 09:53:44.654847  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.693017ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45542]
I0919 09:53:44.741389  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:53:44.741651  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:53:44.744055  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:53:44.744096  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:53:44.745408  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:53:44.747869  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:53:44.755153  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.871306ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45542]
I0919 09:53:44.855113  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.89353ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45542]
I0919 09:53:44.955253  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (2.032471ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45542]
I0919 09:53:45.055772  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (2.22622ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45542]
I0919 09:53:45.155099  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.926572ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45542]
I0919 09:53:45.255203  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.81455ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45542]
I0919 09:53:45.355528  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (2.189636ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45542]
I0919 09:53:45.455107  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.885944ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45542]
I0919 09:53:45.554880  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.754059ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45542]
I0919 09:53:45.654882  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.710585ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45542]
I0919 09:53:45.741525  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:53:45.741806  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:53:45.744228  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:53:45.744302  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:53:45.745563  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:53:45.748028  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:53:45.754899  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.738785ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45542]
I0919 09:53:45.859194  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.79454ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45542]
I0919 09:53:45.955102  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.714275ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45542]
I0919 09:53:46.055143  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.952593ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45542]
I0919 09:53:46.154902  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.721696ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45542]
I0919 09:53:46.254928  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.754413ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45542]
I0919 09:53:46.355455  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (2.274814ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45542]
I0919 09:53:46.456103  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.889923ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45542]
I0919 09:53:46.554877  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.742728ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45542]
I0919 09:53:46.654986  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.86331ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45542]
I0919 09:53:46.741628  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:53:46.741927  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:53:46.744417  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:53:46.744557  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:53:46.745715  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:53:46.748175  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:53:46.755016  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.914123ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45542]
I0919 09:53:46.855366  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (2.217393ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45542]
I0919 09:53:46.955064  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.894177ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45542]
I0919 09:53:47.054909  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.761546ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45542]
I0919 09:53:47.155048  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.765559ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45542]
I0919 09:53:47.255517  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (2.329524ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45542]
I0919 09:53:47.355612  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (2.385139ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45542]
I0919 09:53:47.455541  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (2.250701ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45542]
I0919 09:53:47.555243  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.971511ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45542]
I0919 09:53:47.655629  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (2.347408ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45542]
I0919 09:53:47.741815  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:53:47.742245  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:53:47.744620  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:53:47.744697  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:53:47.745893  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:53:47.748366  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:53:47.755062  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.853751ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45542]
I0919 09:53:47.854969  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.635831ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45542]
I0919 09:53:47.954966  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.829502ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45542]
I0919 09:53:48.055308  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (2.051175ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45542]
I0919 09:53:48.154966  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.873057ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45542]
I0919 09:53:48.254909  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.77851ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45542]
I0919 09:53:48.354872  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.703232ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45542]
I0919 09:53:48.455269  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (2.006946ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45542]
I0919 09:53:48.562712  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (9.479993ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45542]
I0919 09:53:48.655507  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (2.294662ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45542]
I0919 09:53:48.742038  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:53:48.742624  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:53:48.744782  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:53:48.744883  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:53:48.746057  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:53:48.748555  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:53:48.755121  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.92437ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45542]
I0919 09:53:48.855165  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.936189ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45542]
I0919 09:53:48.955122  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.963707ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45542]
I0919 09:53:49.060774  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (7.610961ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45542]
I0919 09:53:49.155211  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.813911ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45542]
I0919 09:53:49.255442  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (2.337178ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45542]
I0919 09:53:49.355030  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.750864ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45542]
I0919 09:53:49.455502  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (2.19972ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45542]
I0919 09:53:49.554923  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.734754ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45542]
I0919 09:53:49.658017  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (2.69794ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45542]
I0919 09:53:49.742301  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:53:49.742756  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:53:49.744995  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:53:49.744998  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:53:49.746429  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:53:49.748776  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:53:49.755196  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.813875ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45542]
I0919 09:53:49.855673  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (2.446234ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45542]
I0919 09:53:49.955213  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (2.058006ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45542]
I0919 09:53:50.055604  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (2.345914ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45542]
I0919 09:53:50.155055  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.921423ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45542]
I0919 09:53:50.255048  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.877507ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45542]
I0919 09:53:50.354988  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.760474ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45542]
I0919 09:53:50.455299  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (2.090788ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45542]
I0919 09:53:50.559652  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.838041ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45542]
E0919 09:53:50.596236  108095 event_broadcaster.go:244] Unable to write event: 'Post http://127.0.0.1:35645/apis/events.k8s.io/v1beta1/namespaces/permit-plugin5052de6b-1963-44f6-970a-8702b6b1a0b9/events: dial tcp 127.0.0.1:35645: connect: connection refused' (may retry after sleeping)
I0919 09:53:50.655080  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.91522ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45542]
I0919 09:53:50.742560  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:53:50.742997  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:53:50.745197  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:53:50.745295  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:53:50.746613  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:53:50.748893  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:53:50.755762  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (2.185653ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45542]
I0919 09:53:50.855226  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.986151ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45542]
I0919 09:53:50.954842  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.684382ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45542]
I0919 09:53:51.055240  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (2.055278ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45542]
I0919 09:53:51.155324  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (2.144807ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45542]
I0919 09:53:51.255659  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (2.473649ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45542]
I0919 09:53:51.355208  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (2.000221ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45542]
I0919 09:53:51.455881  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (2.29611ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45542]
I0919 09:53:51.555132  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.883521ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45542]
I0919 09:53:51.655195  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (2.000217ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45542]
I0919 09:53:51.742751  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:53:51.743166  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:53:51.745419  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:53:51.745577  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:53:51.746777  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:53:51.749218  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:53:51.761950  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (8.782856ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45542]
I0919 09:53:51.855065  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.819534ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45542]
I0919 09:53:51.955324  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (2.181528ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45542]
I0919 09:53:52.055750  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (2.578961ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45542]
I0919 09:53:52.156066  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.841989ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45542]
I0919 09:53:52.255643  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (2.378531ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45542]
I0919 09:53:52.357162  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (3.934717ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45542]
I0919 09:53:52.455850  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (2.202792ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45542]
I0919 09:53:52.550182  108095 httplog.go:90] GET /api/v1/namespaces/default: (1.88239ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45542]
I0919 09:53:52.552168  108095 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.472534ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45542]
I0919 09:53:52.553915  108095 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.238044ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45542]
I0919 09:53:52.555790  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.731838ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:53:52.655076  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.893433ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:53:52.742998  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:53:52.743367  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:53:52.745618  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:53:52.745739  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:53:52.747088  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:53:52.749491  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:53:52.754879  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.7448ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:53:52.855372  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.911999ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:53:52.955025  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.782978ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:53:53.055159  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.911134ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:53:53.156508  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.907186ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:53:53.254974  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.627675ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:53:53.355340  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (2.141363ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:53:53.455830  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (2.358636ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:53:53.555324  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (2.0861ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:53:53.657176  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (3.364598ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:53:53.743244  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:53:53.743561  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:53:53.745827  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:53:53.745978  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:53:53.747313  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:53:53.749715  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:53:53.755412  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (2.23427ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:53:53.855659  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (2.387559ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:53:53.955950  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (2.489251ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:53:54.058970  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (5.726804ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:53:54.159002  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (5.746471ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:53:54.258838  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.676152ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:53:54.355219  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.985317ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:53:54.456631  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (2.574398ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:53:54.557851  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (4.700637ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:53:54.655846  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (2.586435ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:53:54.743413  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:53:54.743722  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:53:54.746026  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:53:54.746124  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:53:54.747482  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:53:54.749916  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:53:54.756586  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (3.322335ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:53:54.857659  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (3.492856ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:53:54.956446  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (3.138483ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:53:55.058696  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (5.258952ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:53:55.158214  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (4.959518ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:53:55.281070  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (24.25179ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:53:55.356411  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (3.206631ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:53:55.458536  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (5.381221ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:53:55.555401  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (2.235691ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:53:55.655886  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (2.131634ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:53:55.743590  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:53:55.743874  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:53:55.746198  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:53:55.746305  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:53:55.747653  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:53:55.750111  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:53:55.755814  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (2.016394ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:53:55.855549  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (2.29635ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:53:55.955294  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (2.08426ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:53:56.055212  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (2.039237ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:53:56.155573  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (2.35561ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:53:56.255610  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (2.261214ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:53:56.355068  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.886155ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:53:56.455072  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.908974ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:53:56.555109  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.905871ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:53:56.654974  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.804854ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:53:56.743877  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:53:56.744328  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:53:56.746417  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:53:56.746411  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:53:56.748562  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:53:56.750319  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:53:56.755999  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (2.815312ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:53:56.855391  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (2.141327ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:53:56.955242  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (2.092933ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:53:57.055463  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (2.003871ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:53:57.159493  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (3.050134ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:53:57.255598  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (2.372587ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:53:57.355339  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.987837ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:53:57.455197  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.917667ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:53:57.555288  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (2.099386ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:53:57.655604  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (2.421046ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:53:57.744059  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:53:57.744528  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:53:57.746626  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:53:57.746729  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:53:57.748681  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:53:57.750510  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:53:57.754696  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.58222ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:53:57.855048  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.832428ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:53:57.954968  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.739669ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:53:58.055262  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (2.065612ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:53:58.154496  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.435708ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:53:58.255597  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (2.450484ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:53:58.354752  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.592601ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:53:58.455630  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (2.487101ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:53:58.555349  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.898906ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:53:58.655023  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.908973ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:53:58.744234  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:53:58.744615  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:53:58.746817  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:53:58.746832  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:53:58.748859  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:53:58.750694  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:53:58.754900  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.783506ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:53:58.855095  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.928136ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:53:58.955528  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (2.183648ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:53:59.055960  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (2.75189ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:53:59.155657  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (2.478394ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:53:59.255415  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (2.231757ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:53:59.355080  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.891257ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:53:59.455169  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.840461ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:53:59.555069  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.933525ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:53:59.655519  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (2.194688ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:53:59.744490  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:53:59.744803  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:53:59.747005  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:53:59.747005  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:53:59.749050  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:53:59.750862  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:53:59.758397  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (4.68052ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:53:59.855365  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (2.104589ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:53:59.955247  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.974827ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:54:00.054931  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.748389ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:54:00.156298  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (3.11043ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:54:00.254861  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.710241ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:54:00.355361  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (2.152854ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:54:00.458060  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (4.914531ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:54:00.554854  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.699521ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:54:00.654532  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.392392ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:54:00.744680  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:54:00.745535  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:54:00.747815  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:54:00.747917  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:54:00.749218  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:54:00.751002  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:54:00.755152  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.589295ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:54:00.855084  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.838634ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:54:00.955116  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.76507ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:54:01.055249  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.851854ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:54:01.155279  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (2.043753ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:54:01.257828  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (4.630166ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:54:01.355084  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.876181ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:54:01.456906  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.509307ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:54:01.554790  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.665103ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:54:01.655301  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.992155ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:54:01.744839  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:54:01.745704  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:54:01.748057  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:54:01.748187  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:54:01.749370  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:54:01.751184  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:54:01.755123  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.959474ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:54:01.855391  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (2.238052ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:54:01.955058  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.869521ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:54:02.054977  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.802532ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:54:02.155131  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.86693ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:54:02.255018  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.708296ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
E0919 09:54:02.322465  108095 event_broadcaster.go:244] Unable to write event: 'Post http://127.0.0.1:35645/apis/events.k8s.io/v1beta1/namespaces/permit-plugin5052de6b-1963-44f6-970a-8702b6b1a0b9/events: dial tcp 127.0.0.1:35645: connect: connection refused' (may retry after sleeping)
I0919 09:54:02.355014  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.842123ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:54:02.455199  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (2.003814ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:54:02.550308  108095 httplog.go:90] GET /api/v1/namespaces/default: (1.386395ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:54:02.551990  108095 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.27483ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:54:02.554498  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.464679ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45542]
I0919 09:54:02.555373  108095 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (3.050107ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:54:02.654900  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.725106ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:54:02.744999  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:54:02.745896  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:54:02.748219  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:54:02.748350  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:54:02.749486  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:54:02.751286  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:54:02.754747  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.603039ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:54:02.855387  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (2.214099ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:54:02.955169  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.966225ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:54:03.061462  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (2.020706ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:54:03.155164  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.980007ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:54:03.255419  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (2.203194ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:54:03.355216  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.941625ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:54:03.454998  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.783012ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:54:03.558742  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (2.110183ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:54:03.655172  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (2.02509ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:54:03.745207  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:54:03.746035  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:54:03.748879  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:54:03.749015  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:54:03.750422  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:54:03.751437  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:54:03.755625  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.946395ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:54:03.854697  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.475269ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:54:03.955047  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.888793ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:54:04.055629  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (2.424934ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:54:04.155150  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (2.009338ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:54:04.255122  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.937358ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:54:04.355441  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (2.31143ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:54:04.455017  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.790522ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:54:04.554929  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.769502ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:54:04.654608  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.518798ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:54:04.745382  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:54:04.746146  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:54:04.749612  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:54:04.749682  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:54:04.750529  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:54:04.751619  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:54:04.754566  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.484414ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:54:04.854909  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.739456ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:54:04.955285  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (2.067154ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:54:05.055206  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.786987ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:54:05.154922  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.708257ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:54:05.257542  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.767106ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:54:05.355168  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.9248ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:54:05.455388  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (2.083069ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:54:05.555497  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (2.35212ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:54:05.654650  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.503905ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:54:05.746086  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:54:05.746306  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:54:05.750462  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:54:05.750504  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:54:05.750739  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:54:05.751871  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:54:05.754883  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.675746ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:54:05.855154  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (2.004523ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:54:05.954971  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.726704ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:54:06.054694  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.548143ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:54:06.155376  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (2.231212ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:54:06.254488  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.352533ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:54:06.354815  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.603445ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:54:06.454920  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.740345ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:54:06.555573  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (2.457333ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:54:06.654823  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.706339ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:54:06.746296  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:54:06.746443  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:54:06.750856  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:54:06.750910  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:54:06.751213  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:54:06.752027  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:54:06.755023  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.438271ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:54:06.854515  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.445137ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:54:06.955211  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.919075ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:54:07.055271  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (2.062817ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:54:07.155234  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (2.088226ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:54:07.254869  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.634703ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:54:07.354493  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.365357ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:54:07.454924  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.730157ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:54:07.554580  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.48899ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:54:07.654924  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.659228ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:54:07.746484  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:54:07.746564  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:54:07.751056  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:54:07.751056  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:54:07.752035  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:54:07.752179  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:54:07.754721  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.577388ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:54:07.854802  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.65485ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:54:07.955024  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.884666ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:54:08.054845  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.718007ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:54:08.154899  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.79372ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:54:08.255321  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (2.170269ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:54:08.355144  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.569157ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:54:08.454900  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.734866ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
E0919 09:54:08.459777  108095 factory.go:590] Error getting pod permit-plugin5052de6b-1963-44f6-970a-8702b6b1a0b9/test-pod for retry: Get http://127.0.0.1:35645/api/v1/namespaces/permit-plugin5052de6b-1963-44f6-970a-8702b6b1a0b9/pods/test-pod: dial tcp 127.0.0.1:35645: connect: connection refused; retrying...
I0919 09:54:08.555187  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (2.021184ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:54:08.655026  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.922562ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:54:08.746676  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:54:08.746676  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:54:08.751239  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:54:08.751300  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:54:08.752241  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:54:08.752325  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:54:08.754786  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.70753ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:54:08.855703  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (2.465585ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:54:08.955167  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.951397ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:54:09.054773  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.535856ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:54:09.154982  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.755367ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:54:09.255183  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (2.001999ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:54:09.355269  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (2.034552ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:54:09.455096  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.865459ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:54:09.555771  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (2.345485ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:54:09.654747  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.526815ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:54:09.747155  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:54:09.747412  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:54:09.752453  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:54:09.752582  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:54:09.753472  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:54:09.753518  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:54:09.755707  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.968989ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:54:09.863840  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (10.157083ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:54:09.960613  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (7.338678ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:54:10.055136  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.968599ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:54:10.155567  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (2.236409ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:54:10.254977  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.683091ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:54:10.355143  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.888281ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:54:10.455681  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.75825ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:54:10.554927  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.814471ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:54:10.654866  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.741842ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:54:10.747359  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:54:10.747524  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:54:10.752642  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:54:10.752815  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:54:10.755229  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.817808ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:54:10.755413  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:54:10.755494  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:54:10.858524  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (2.682194ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:54:10.956003  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.77801ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:54:11.055275  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (2.075221ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:54:11.156213  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (2.817938ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:54:11.254789  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.642539ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:54:11.354845  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.682695ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:54:11.455003  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.779294ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:54:11.555137  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.709014ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:54:11.654853  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.749988ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:54:11.747668  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:54:11.747717  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:54:11.752843  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:54:11.752965  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:54:11.754877  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.704372ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:54:11.755628  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:54:11.755630  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:54:11.856149  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (2.983099ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:54:11.954957  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.725978ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:54:12.055052  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.890411ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:54:12.154827  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.644568ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:54:12.255119  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.901755ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:54:12.354901  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.611824ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:54:12.454975  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.731475ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:54:12.550819  108095 httplog.go:90] GET /api/v1/namespaces/default: (1.789174ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:54:12.552862  108095 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.292783ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:54:12.554595  108095 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.134211ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45856]
I0919 09:54:12.554986  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.584763ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45542]
I0919 09:54:12.654857  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.703564ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45542]
I0919 09:54:12.747872  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:54:12.747931  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:54:12.753687  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:54:12.753799  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:54:12.755021  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.86816ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45542]
I0919 09:54:12.755732  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:54:12.755791  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:54:12.854893  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.737109ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45542]
I0919 09:54:12.857138  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.748308ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45542]
I0919 09:54:12.864148  108095 httplog.go:90] DELETE /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (6.550693ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45542]
I0919 09:54:12.866790  108095 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure77488f17-48e2-4a27-8298-cce5d48791f5/pods/pidpressure-fake-name: (1.072219ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45542]
E0919 09:54:12.867648  108095 scheduling_queue.go:833] Error while retrieving next pod from scheduling queue: scheduling queue is closed
I0919 09:54:12.868082  108095 httplog.go:90] GET /api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=30491&timeout=9m59s&timeoutSeconds=599&watch=true: (30.126920621s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45526]
I0919 09:54:12.868766  108095 httplog.go:90] GET /apis/apps/v1/statefulsets?allowWatchBookmarks=true&resourceVersion=30502&timeout=6m30s&timeoutSeconds=390&watch=true: (30.127262933s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45534]
I0919 09:54:12.868925  108095 httplog.go:90] GET /apis/storage.k8s.io/v1beta1/csinodes?allowWatchBookmarks=true&resourceVersion=30502&timeout=7m54s&timeoutSeconds=474&watch=true: (30.127431216s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45528]
I0919 09:54:12.869057  108095 httplog.go:90] GET /api/v1/services?allowWatchBookmarks=true&resourceVersion=30758&timeout=6m56s&timeoutSeconds=416&watch=true: (30.127428985s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45536]
I0919 09:54:12.869175  108095 httplog.go:90] GET /apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=30502&timeout=9m35s&timeoutSeconds=575&watch=true: (30.124275829s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45530]
I0919 09:54:12.869283  108095 httplog.go:90] GET /api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=30491&timeout=5m27s&timeoutSeconds=327&watch=true: (30.12411906s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45532]
I0919 09:54:12.869388  108095 httplog.go:90] GET /api/v1/nodes?allowWatchBookmarks=true&resourceVersion=30492&timeout=7m35s&timeoutSeconds=455&watch=true: (30.121826004s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45540]
I0919 09:54:12.869503  108095 httplog.go:90] GET /api/v1/pods?allowWatchBookmarks=true&fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&resourceVersion=30493&timeoutSeconds=531&watch=true: (30.232631067s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45450]
I0919 09:54:12.869774  108095 httplog.go:90] GET /apis/apps/v1/replicasets?allowWatchBookmarks=true&resourceVersion=30502&timeout=5m27s&timeoutSeconds=327&watch=true: (30.129332908s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45538]
I0919 09:54:12.869964  108095 httplog.go:90] GET /api/v1/replicationcontrollers?allowWatchBookmarks=true&resourceVersion=30495&timeout=9m33s&timeoutSeconds=573&watch=true: (30.128907461s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45524]
I0919 09:54:12.870919  108095 httplog.go:90] GET /apis/policy/v1beta1/poddisruptionbudgets?allowWatchBookmarks=true&resourceVersion=30501&timeout=6m58s&timeoutSeconds=418&watch=true: (30.131318592s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45448]
I0919 09:54:12.874444  108095 httplog.go:90] DELETE /api/v1/nodes: (4.846397ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45542]
I0919 09:54:12.874700  108095 controller.go:182] Shutting down kubernetes service endpoint reconciler
I0919 09:54:12.876194  108095 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.081827ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45542]
I0919 09:54:12.878437  108095 httplog.go:90] PUT /api/v1/namespaces/default/endpoints/kubernetes: (1.663286ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45542]
--- FAIL: TestNodePIDPressure (33.84s)
    predicates_test.go:924: Test Failed: error, timed out waiting for the condition, while waiting for scheduled

				from junit_d965d8661547eb73cabe6d94d5550ec333e4c0fa_20190919-094603.xml



k8s.io/kubernetes/test/integration/scheduler TestSchedulerCreationFromConfigMap 4.19s

go test -v k8s.io/kubernetes/test/integration/scheduler -run TestSchedulerCreationFromConfigMap$
=== RUN   TestSchedulerCreationFromConfigMap
W0919 09:55:53.997791  108095 services.go:35] No CIDR for service cluster IPs specified. Default value which was 10.0.0.0/24 is deprecated and will be removed in future releases. Please specify it using --service-cluster-ip-range on kube-apiserver.
I0919 09:55:53.997812  108095 services.go:47] Setting service IP to "10.0.0.1" (read-write).
I0919 09:55:53.997824  108095 master.go:303] Node port range unspecified. Defaulting to 30000-32767.
I0919 09:55:53.997832  108095 master.go:259] Using reconciler: 
I0919 09:55:53.999955  108095 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.000419  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:55:54.000452  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:55:54.002165  108095 store.go:1342] Monitoring podtemplates count at <storage-prefix>//podtemplates
I0919 09:55:54.002203  108095 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.002225  108095 reflector.go:153] Listing and watching *core.PodTemplate from storage/cacher.go:/podtemplates
I0919 09:55:54.002544  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:55:54.002564  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:55:54.003613  108095 store.go:1342] Monitoring events count at <storage-prefix>//events
I0919 09:55:54.003660  108095 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.003700  108095 reflector.go:153] Listing and watching *core.Event from storage/cacher.go:/events
I0919 09:55:54.003798  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:55:54.003819  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:55:54.005326  108095 store.go:1342] Monitoring limitranges count at <storage-prefix>//limitranges
I0919 09:55:54.005383  108095 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.005450  108095 reflector.go:153] Listing and watching *core.LimitRange from storage/cacher.go:/limitranges
I0919 09:55:54.005584  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:55:54.005617  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:55:54.006563  108095 store.go:1342] Monitoring resourcequotas count at <storage-prefix>//resourcequotas
I0919 09:55:54.006728  108095 reflector.go:153] Listing and watching *core.ResourceQuota from storage/cacher.go:/resourcequotas
I0919 09:55:54.006787  108095 watch_cache.go:405] Replace watchCache (rev: 50719) 
I0919 09:55:54.006775  108095 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.007007  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:55:54.007031  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:55:54.009365  108095 watch_cache.go:405] Replace watchCache (rev: 50719) 
I0919 09:55:54.010318  108095 watch_cache.go:405] Replace watchCache (rev: 50718) 
I0919 09:55:54.010566  108095 watch_cache.go:405] Replace watchCache (rev: 50718) 
I0919 09:55:54.011388  108095 store.go:1342] Monitoring secrets count at <storage-prefix>//secrets
I0919 09:55:54.011535  108095 reflector.go:153] Listing and watching *core.Secret from storage/cacher.go:/secrets
I0919 09:55:54.011860  108095 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.012881  108095 watch_cache.go:405] Replace watchCache (rev: 50720) 
I0919 09:55:54.013275  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:55:54.013701  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:55:54.015181  108095 store.go:1342] Monitoring persistentvolumes count at <storage-prefix>//persistentvolumes
I0919 09:55:54.015412  108095 reflector.go:153] Listing and watching *core.PersistentVolume from storage/cacher.go:/persistentvolumes
I0919 09:55:54.015417  108095 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.015802  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:55:54.015827  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:55:54.016685  108095 store.go:1342] Monitoring persistentvolumeclaims count at <storage-prefix>//persistentvolumeclaims
I0919 09:55:54.016769  108095 reflector.go:153] Listing and watching *core.PersistentVolumeClaim from storage/cacher.go:/persistentvolumeclaims
I0919 09:55:54.016896  108095 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.017120  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:55:54.017154  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:55:54.017919  108095 store.go:1342] Monitoring configmaps count at <storage-prefix>//configmaps
I0919 09:55:54.018133  108095 reflector.go:153] Listing and watching *core.ConfigMap from storage/cacher.go:/configmaps
I0919 09:55:54.018138  108095 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.018300  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:55:54.018321  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:55:54.020051  108095 store.go:1342] Monitoring namespaces count at <storage-prefix>//namespaces
I0919 09:55:54.020221  108095 reflector.go:153] Listing and watching *core.Namespace from storage/cacher.go:/namespaces
I0919 09:55:54.020211  108095 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.020556  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:55:54.020581  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:55:54.021456  108095 store.go:1342] Monitoring endpoints count at <storage-prefix>//services/endpoints
I0919 09:55:54.021601  108095 reflector.go:153] Listing and watching *core.Endpoints from storage/cacher.go:/services/endpoints
I0919 09:55:54.021659  108095 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.021823  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:55:54.021844  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:55:54.021992  108095 watch_cache.go:405] Replace watchCache (rev: 50721) 
I0919 09:55:54.022054  108095 watch_cache.go:405] Replace watchCache (rev: 50721) 
I0919 09:55:54.022054  108095 watch_cache.go:405] Replace watchCache (rev: 50721) 
I0919 09:55:54.022265  108095 watch_cache.go:405] Replace watchCache (rev: 50721) 
I0919 09:55:54.022606  108095 watch_cache.go:405] Replace watchCache (rev: 50721) 
I0919 09:55:54.023747  108095 store.go:1342] Monitoring nodes count at <storage-prefix>//minions
I0919 09:55:54.023772  108095 reflector.go:153] Listing and watching *core.Node from storage/cacher.go:/minions
I0919 09:55:54.023931  108095 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.024125  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:55:54.024148  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:55:54.024916  108095 store.go:1342] Monitoring pods count at <storage-prefix>//pods
I0919 09:55:54.024975  108095 reflector.go:153] Listing and watching *core.Pod from storage/cacher.go:/pods
I0919 09:55:54.025142  108095 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.025270  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:55:54.025287  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:55:54.025927  108095 watch_cache.go:405] Replace watchCache (rev: 50721) 
I0919 09:55:54.026284  108095 store.go:1342] Monitoring serviceaccounts count at <storage-prefix>//serviceaccounts
I0919 09:55:54.026361  108095 reflector.go:153] Listing and watching *core.ServiceAccount from storage/cacher.go:/serviceaccounts
I0919 09:55:54.026477  108095 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.026633  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:55:54.026657  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:55:54.027471  108095 store.go:1342] Monitoring services count at <storage-prefix>//services/specs
I0919 09:55:54.027488  108095 watch_cache.go:405] Replace watchCache (rev: 50721) 
I0919 09:55:54.027508  108095 reflector.go:153] Listing and watching *core.Service from storage/cacher.go:/services/specs
I0919 09:55:54.027513  108095 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.027733  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:55:54.027760  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:55:54.028795  108095 watch_cache.go:405] Replace watchCache (rev: 50721) 
I0919 09:55:54.030087  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:55:54.030487  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:55:54.030500  108095 watch_cache.go:405] Replace watchCache (rev: 50722) 
I0919 09:55:54.031704  108095 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.032102  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:55:54.032269  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:55:54.033521  108095 store.go:1342] Monitoring replicationcontrollers count at <storage-prefix>//controllers
I0919 09:55:54.033546  108095 rest.go:115] the default service ipfamily for this cluster is: IPv4
I0919 09:55:54.033600  108095 reflector.go:153] Listing and watching *core.ReplicationController from storage/cacher.go:/controllers
I0919 09:55:54.034466  108095 storage_factory.go:285] storing bindings in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.034855  108095 storage_factory.go:285] storing componentstatuses in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.035412  108095 watch_cache.go:405] Replace watchCache (rev: 50723) 
I0919 09:55:54.035642  108095 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.036402  108095 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.037102  108095 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.037865  108095 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.038286  108095 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.038399  108095 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.038706  108095 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.039368  108095 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.040006  108095 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.040413  108095 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.041485  108095 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.041984  108095 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.042548  108095 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.043067  108095 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.043890  108095 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.044290  108095 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.044691  108095 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.045260  108095 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.045514  108095 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.045861  108095 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.046206  108095 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.046979  108095 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.047672  108095 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.048875  108095 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.050094  108095 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.050475  108095 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.051021  108095 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.051871  108095 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.053042  108095 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.054747  108095 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.056413  108095 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.057273  108095 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.057902  108095 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.058137  108095 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.058338  108095 master.go:450] Skipping disabled API group "auditregistration.k8s.io".
I0919 09:55:54.058437  108095 master.go:461] Enabling API group "authentication.k8s.io".
I0919 09:55:54.058482  108095 master.go:461] Enabling API group "authorization.k8s.io".
I0919 09:55:54.058684  108095 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.058990  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:55:54.059127  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:55:54.061145  108095 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0919 09:55:54.061367  108095 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.061536  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:55:54.061566  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:55:54.061695  108095 reflector.go:153] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0919 09:55:54.062925  108095 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0919 09:55:54.063091  108095 reflector.go:153] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0919 09:55:54.063141  108095 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.063311  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:55:54.063333  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:55:54.063991  108095 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0919 09:55:54.064019  108095 master.go:461] Enabling API group "autoscaling".
I0919 09:55:54.064037  108095 reflector.go:153] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0919 09:55:54.064182  108095 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.064367  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:55:54.064394  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:55:54.065402  108095 watch_cache.go:405] Replace watchCache (rev: 50729) 
I0919 09:55:54.065475  108095 watch_cache.go:405] Replace watchCache (rev: 50729) 
I0919 09:55:54.065853  108095 watch_cache.go:405] Replace watchCache (rev: 50729) 
I0919 09:55:54.065873  108095 reflector.go:153] Listing and watching *batch.Job from storage/cacher.go:/jobs
I0919 09:55:54.065856  108095 store.go:1342] Monitoring jobs.batch count at <storage-prefix>//jobs
I0919 09:55:54.066179  108095 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.066345  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:55:54.066375  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:55:54.066667  108095 watch_cache.go:405] Replace watchCache (rev: 50729) 
I0919 09:55:54.067557  108095 store.go:1342] Monitoring cronjobs.batch count at <storage-prefix>//cronjobs
I0919 09:55:54.067585  108095 master.go:461] Enabling API group "batch".
I0919 09:55:54.067694  108095 reflector.go:153] Listing and watching *batch.CronJob from storage/cacher.go:/cronjobs
I0919 09:55:54.067767  108095 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.067918  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:55:54.067972  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:55:54.068654  108095 watch_cache.go:405] Replace watchCache (rev: 50729) 
I0919 09:55:54.069321  108095 store.go:1342] Monitoring certificatesigningrequests.certificates.k8s.io count at <storage-prefix>//certificatesigningrequests
I0919 09:55:54.069350  108095 master.go:461] Enabling API group "certificates.k8s.io".
I0919 09:55:54.069506  108095 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.069671  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:55:54.069691  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:55:54.069709  108095 reflector.go:153] Listing and watching *certificates.CertificateSigningRequest from storage/cacher.go:/certificatesigningrequests
I0919 09:55:54.071168  108095 watch_cache.go:405] Replace watchCache (rev: 50730) 
I0919 09:55:54.071580  108095 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0919 09:55:54.071615  108095 reflector.go:153] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0919 09:55:54.071991  108095 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.072250  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:55:54.072305  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:55:54.072629  108095 watch_cache.go:405] Replace watchCache (rev: 50730) 
I0919 09:55:54.073818  108095 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0919 09:55:54.073848  108095 master.go:461] Enabling API group "coordination.k8s.io".
I0919 09:55:54.073866  108095 master.go:450] Skipping disabled API group "discovery.k8s.io".
I0919 09:55:54.073896  108095 reflector.go:153] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0919 09:55:54.074047  108095 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.074226  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:55:54.074257  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:55:54.075104  108095 watch_cache.go:405] Replace watchCache (rev: 50730) 
I0919 09:55:54.075266  108095 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0919 09:55:54.075302  108095 master.go:461] Enabling API group "extensions".
I0919 09:55:54.075468  108095 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.075616  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:55:54.075651  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:55:54.075656  108095 reflector.go:153] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0919 09:55:54.076478  108095 store.go:1342] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I0919 09:55:54.076629  108095 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.076768  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:55:54.076789  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:55:54.076870  108095 reflector.go:153] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I0919 09:55:54.077549  108095 watch_cache.go:405] Replace watchCache (rev: 50731) 
I0919 09:55:54.077885  108095 watch_cache.go:405] Replace watchCache (rev: 50731) 
I0919 09:55:54.079351  108095 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0919 09:55:54.079396  108095 master.go:461] Enabling API group "networking.k8s.io".
I0919 09:55:54.079544  108095 reflector.go:153] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0919 09:55:54.079579  108095 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.079980  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:55:54.080101  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:55:54.081365  108095 watch_cache.go:405] Replace watchCache (rev: 50731) 
I0919 09:55:54.082080  108095 store.go:1342] Monitoring runtimeclasses.node.k8s.io count at <storage-prefix>//runtimeclasses
I0919 09:55:54.082102  108095 master.go:461] Enabling API group "node.k8s.io".
I0919 09:55:54.082238  108095 reflector.go:153] Listing and watching *node.RuntimeClass from storage/cacher.go:/runtimeclasses
I0919 09:55:54.082269  108095 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.083019  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:55:54.083093  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:55:54.084278  108095 watch_cache.go:405] Replace watchCache (rev: 50731) 
I0919 09:55:54.084802  108095 store.go:1342] Monitoring poddisruptionbudgets.policy count at <storage-prefix>//poddisruptionbudgets
I0919 09:55:54.084884  108095 reflector.go:153] Listing and watching *policy.PodDisruptionBudget from storage/cacher.go:/poddisruptionbudgets
I0919 09:55:54.085273  108095 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.085418  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:55:54.085442  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:55:54.086046  108095 watch_cache.go:405] Replace watchCache (rev: 50731) 
I0919 09:55:54.086390  108095 store.go:1342] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicy
I0919 09:55:54.086416  108095 master.go:461] Enabling API group "policy".
I0919 09:55:54.086420  108095 reflector.go:153] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicy
I0919 09:55:54.086455  108095 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.086634  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:55:54.086663  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:55:54.087492  108095 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0919 09:55:54.087575  108095 reflector.go:153] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0919 09:55:54.087629  108095 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.087645  108095 watch_cache.go:405] Replace watchCache (rev: 50731) 
I0919 09:55:54.087781  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:55:54.087797  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:55:54.088449  108095 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0919 09:55:54.088490  108095 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.088548  108095 reflector.go:153] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0919 09:55:54.088621  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:55:54.088641  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:55:54.088731  108095 watch_cache.go:405] Replace watchCache (rev: 50732) 
I0919 09:55:54.089447  108095 watch_cache.go:405] Replace watchCache (rev: 50732) 
I0919 09:55:54.089736  108095 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0919 09:55:54.089766  108095 reflector.go:153] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0919 09:55:54.089919  108095 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.090402  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:55:54.090429  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:55:54.090568  108095 watch_cache.go:405] Replace watchCache (rev: 50732) 
I0919 09:55:54.091740  108095 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0919 09:55:54.091824  108095 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.091872  108095 reflector.go:153] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0919 09:55:54.092167  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:55:54.092194  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:55:54.092816  108095 watch_cache.go:405] Replace watchCache (rev: 50732) 
I0919 09:55:54.093013  108095 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0919 09:55:54.093240  108095 reflector.go:153] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0919 09:55:54.093232  108095 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.093406  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:55:54.093425  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:55:54.095123  108095 watch_cache.go:405] Replace watchCache (rev: 50732) 
I0919 09:55:54.096579  108095 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0919 09:55:54.096618  108095 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.096767  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:55:54.096783  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:55:54.096854  108095 reflector.go:153] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0919 09:55:54.099458  108095 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0919 09:55:54.099637  108095 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.099806  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:55:54.099825  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:55:54.099922  108095 reflector.go:153] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0919 09:55:54.101110  108095 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0919 09:55:54.101141  108095 reflector.go:153] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0919 09:55:54.101161  108095 master.go:461] Enabling API group "rbac.authorization.k8s.io".
I0919 09:55:54.103223  108095 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.103417  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:55:54.103451  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:55:54.104344  108095 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0919 09:55:54.104514  108095 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.104556  108095 reflector.go:153] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0919 09:55:54.104713  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:55:54.104731  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:55:54.105987  108095 watch_cache.go:405] Replace watchCache (rev: 50734) 
I0919 09:55:54.106005  108095 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0919 09:55:54.106031  108095 master.go:461] Enabling API group "scheduling.k8s.io".
I0919 09:55:54.105988  108095 watch_cache.go:405] Replace watchCache (rev: 50734) 
I0919 09:55:54.106085  108095 reflector.go:153] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0919 09:55:54.106110  108095 master.go:450] Skipping disabled API group "settings.k8s.io".
I0919 09:55:54.106247  108095 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.106338  108095 watch_cache.go:405] Replace watchCache (rev: 50734) 
I0919 09:55:54.106392  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:55:54.106409  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:55:54.108253  108095 watch_cache.go:405] Replace watchCache (rev: 50734) 
I0919 09:55:54.108394  108095 watch_cache.go:405] Replace watchCache (rev: 50734) 
I0919 09:55:54.109266  108095 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0919 09:55:54.109373  108095 reflector.go:153] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0919 09:55:54.109483  108095 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.109885  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:55:54.109916  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:55:54.111050  108095 watch_cache.go:405] Replace watchCache (rev: 50734) 
I0919 09:55:54.111778  108095 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0919 09:55:54.111817  108095 reflector.go:153] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0919 09:55:54.111838  108095 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.112068  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:55:54.112106  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:55:54.113192  108095 watch_cache.go:405] Replace watchCache (rev: 50735) 
I0919 09:55:54.113563  108095 store.go:1342] Monitoring csinodes.storage.k8s.io count at <storage-prefix>//csinodes
I0919 09:55:54.113615  108095 reflector.go:153] Listing and watching *storage.CSINode from storage/cacher.go:/csinodes
I0919 09:55:54.113609  108095 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.113790  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:55:54.113813  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:55:54.114916  108095 watch_cache.go:405] Replace watchCache (rev: 50735) 
I0919 09:55:54.115925  108095 store.go:1342] Monitoring csidrivers.storage.k8s.io count at <storage-prefix>//csidrivers
I0919 09:55:54.116186  108095 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.116241  108095 reflector.go:153] Listing and watching *storage.CSIDriver from storage/cacher.go:/csidrivers
I0919 09:55:54.116509  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:55:54.116537  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:55:54.117396  108095 watch_cache.go:405] Replace watchCache (rev: 50736) 
I0919 09:55:54.118489  108095 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0919 09:55:54.118661  108095 reflector.go:153] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0919 09:55:54.118821  108095 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.119018  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:55:54.119096  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:55:54.119895  108095 watch_cache.go:405] Replace watchCache (rev: 50736) 
I0919 09:55:54.119918  108095 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0919 09:55:54.120111  108095 reflector.go:153] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0919 09:55:54.120153  108095 master.go:461] Enabling API group "storage.k8s.io".
I0919 09:55:54.120311  108095 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.120439  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:55:54.120468  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:55:54.121222  108095 watch_cache.go:405] Replace watchCache (rev: 50737) 
I0919 09:55:54.122461  108095 store.go:1342] Monitoring deployments.apps count at <storage-prefix>//deployments
I0919 09:55:54.122728  108095 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.122869  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:55:54.122872  108095 reflector.go:153] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0919 09:55:54.122900  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:55:54.123901  108095 store.go:1342] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0919 09:55:54.124054  108095 reflector.go:153] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0919 09:55:54.124207  108095 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.124342  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:55:54.124362  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:55:54.125136  108095 watch_cache.go:405] Replace watchCache (rev: 50737) 
I0919 09:55:54.125557  108095 watch_cache.go:405] Replace watchCache (rev: 50738) 
I0919 09:55:54.125961  108095 store.go:1342] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0919 09:55:54.126087  108095 reflector.go:153] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0919 09:55:54.126554  108095 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.126719  108095 watch_cache.go:405] Replace watchCache (rev: 50738) 
I0919 09:55:54.126773  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:55:54.126828  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:55:54.127656  108095 store.go:1342] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0919 09:55:54.127693  108095 reflector.go:153] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0919 09:55:54.127840  108095 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.128547  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:55:54.128579  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:55:54.128852  108095 watch_cache.go:405] Replace watchCache (rev: 50738) 
I0919 09:55:54.129232  108095 store.go:1342] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0919 09:55:54.129259  108095 master.go:461] Enabling API group "apps".
I0919 09:55:54.129306  108095 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.129332  108095 reflector.go:153] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0919 09:55:54.129453  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:55:54.129472  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:55:54.130808  108095 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0919 09:55:54.130921  108095 reflector.go:153] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0919 09:55:54.131168  108095 watch_cache.go:405] Replace watchCache (rev: 50739) 
I0919 09:55:54.131803  108095 watch_cache.go:405] Replace watchCache (rev: 50739) 
I0919 09:55:54.131013  108095 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.132431  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:55:54.132528  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:55:54.133358  108095 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0919 09:55:54.133404  108095 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.133443  108095 reflector.go:153] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0919 09:55:54.133535  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:55:54.133548  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:55:54.134599  108095 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0919 09:55:54.134649  108095 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.134738  108095 reflector.go:153] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0919 09:55:54.134791  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:55:54.134807  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:55:54.134966  108095 watch_cache.go:405] Replace watchCache (rev: 50740) 
I0919 09:55:54.135882  108095 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0919 09:55:54.135894  108095 watch_cache.go:405] Replace watchCache (rev: 50740) 
I0919 09:55:54.135907  108095 master.go:461] Enabling API group "admissionregistration.k8s.io".
I0919 09:55:54.135970  108095 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.136013  108095 reflector.go:153] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0919 09:55:54.136252  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:55:54.136269  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:55:54.137355  108095 watch_cache.go:405] Replace watchCache (rev: 50740) 
I0919 09:55:54.137359  108095 store.go:1342] Monitoring events count at <storage-prefix>//events
I0919 09:55:54.137391  108095 reflector.go:153] Listing and watching *core.Event from storage/cacher.go:/events
I0919 09:55:54.137471  108095 master.go:461] Enabling API group "events.k8s.io".
I0919 09:55:54.137715  108095 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.137922  108095 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.138202  108095 watch_cache.go:405] Replace watchCache (rev: 50740) 
I0919 09:55:54.138270  108095 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.138754  108095 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.138865  108095 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.138931  108095 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.139160  108095 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.139263  108095 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.139341  108095 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.139420  108095 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.140887  108095 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.141396  108095 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.142651  108095 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.143809  108095 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.145071  108095 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.145438  108095 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.146579  108095 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.147030  108095 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.148166  108095 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.148535  108095 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0919 09:55:54.148683  108095 genericapiserver.go:404] Skipping API batch/v2alpha1 because it has no resources.
I0919 09:55:54.149466  108095 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.149816  108095 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.150089  108095 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.151212  108095 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.151758  108095 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.153278  108095 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.153755  108095 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.154641  108095 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.155360  108095 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.156049  108095 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.157028  108095 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0919 09:55:54.157213  108095 genericapiserver.go:404] Skipping API node.k8s.io/v1alpha1 because it has no resources.
I0919 09:55:54.158105  108095 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.158483  108095 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.159579  108095 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.160615  108095 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.161256  108095 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.162086  108095 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.162788  108095 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.163538  108095 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.163990  108095 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.164687  108095 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.165430  108095 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0919 09:55:54.165491  108095 genericapiserver.go:404] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
I0919 09:55:54.166033  108095 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.166677  108095 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0919 09:55:54.166754  108095 genericapiserver.go:404] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
I0919 09:55:54.167450  108095 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.167899  108095 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.168134  108095 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.168560  108095 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.169109  108095 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.169525  108095 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.170105  108095 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0919 09:55:54.170190  108095 genericapiserver.go:404] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
I0919 09:55:54.171242  108095 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.171954  108095 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.172229  108095 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.173023  108095 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.173294  108095 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.173650  108095 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.174327  108095 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.174628  108095 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.174880  108095 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.175620  108095 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.175842  108095 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.176082  108095 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0919 09:55:54.176156  108095 genericapiserver.go:404] Skipping API apps/v1beta2 because it has no resources.
W0919 09:55:54.176162  108095 genericapiserver.go:404] Skipping API apps/v1beta1 because it has no resources.
I0919 09:55:54.176759  108095 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.177218  108095 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.177857  108095 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.178342  108095 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.179111  108095 storage_factory.go:285] storing events.events.k8s.io in events.k8s.io/v1beta1, reading as events.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"31574e0d-67e6-4b59-9ea1-5fd42713d68c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:55:54.182707  108095 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 09:55:54.182746  108095 healthz.go:177] healthz check poststarthook/bootstrap-controller failed: not finished
I0919 09:55:54.182801  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:55:54.182816  108095 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 09:55:54.182824  108095 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 09:55:54.182832  108095 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[-]poststarthook/bootstrap-controller failed: reason withheld
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 09:55:54.182868  108095 httplog.go:90] GET /healthz: (292.127µs) 0 [Go-http-client/1.1 127.0.0.1:38828]
I0919 09:55:54.184905  108095 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.531798ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38828]
I0919 09:55:54.188990  108095 httplog.go:90] GET /api/v1/services: (1.497189ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38828]
I0919 09:55:54.197112  108095 httplog.go:90] GET /api/v1/services: (1.56576ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38828]
I0919 09:55:54.200395  108095 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 09:55:54.200540  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:55:54.200601  108095 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 09:55:54.200647  108095 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 09:55:54.200695  108095 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 09:55:54.200818  108095 httplog.go:90] GET /healthz: (556.698µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38828]
I0919 09:55:54.202493  108095 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.126964ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38830]
I0919 09:55:54.203107  108095 httplog.go:90] GET /api/v1/services: (1.214799ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38828]
I0919 09:55:54.203319  108095 httplog.go:90] GET /api/v1/services: (1.037649ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:54.205577  108095 httplog.go:90] POST /api/v1/namespaces: (1.90416ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38830]
I0919 09:55:54.208056  108095 httplog.go:90] GET /api/v1/namespaces/kube-public: (2.068075ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:54.210368  108095 httplog.go:90] POST /api/v1/namespaces: (1.888913ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:54.212032  108095 httplog.go:90] GET /api/v1/namespaces/kube-node-lease: (1.288036ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:54.217586  108095 httplog.go:90] POST /api/v1/namespaces: (5.063481ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:54.283797  108095 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 09:55:54.283833  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:55:54.283846  108095 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 09:55:54.283856  108095 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 09:55:54.283866  108095 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 09:55:54.283897  108095 httplog.go:90] GET /healthz: (250.222µs) 0 [Go-http-client/1.1 127.0.0.1:38832]
I0919 09:55:54.301764  108095 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 09:55:54.301803  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:55:54.301815  108095 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 09:55:54.301823  108095 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 09:55:54.301831  108095 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 09:55:54.301868  108095 httplog.go:90] GET /healthz: (282.229µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:54.383808  108095 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 09:55:54.383842  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:55:54.383855  108095 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 09:55:54.383864  108095 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 09:55:54.383874  108095 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 09:55:54.383910  108095 httplog.go:90] GET /healthz: (266.495µs) 0 [Go-http-client/1.1 127.0.0.1:38832]
I0919 09:55:54.401757  108095 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 09:55:54.401802  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:55:54.401816  108095 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 09:55:54.401828  108095 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 09:55:54.401837  108095 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 09:55:54.401890  108095 httplog.go:90] GET /healthz: (290.842µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:54.483967  108095 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 09:55:54.484006  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:55:54.484019  108095 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 09:55:54.484029  108095 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 09:55:54.484038  108095 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 09:55:54.484081  108095 httplog.go:90] GET /healthz: (296.904µs) 0 [Go-http-client/1.1 127.0.0.1:38832]
I0919 09:55:54.502523  108095 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 09:55:54.502565  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:55:54.502578  108095 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 09:55:54.502587  108095 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 09:55:54.502595  108095 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 09:55:54.502636  108095 httplog.go:90] GET /healthz: (287.893µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:54.583758  108095 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 09:55:54.583799  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:55:54.583808  108095 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 09:55:54.583814  108095 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 09:55:54.583821  108095 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 09:55:54.583848  108095 httplog.go:90] GET /healthz: (236.887µs) 0 [Go-http-client/1.1 127.0.0.1:38832]
I0919 09:55:54.601733  108095 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 09:55:54.601767  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:55:54.601776  108095 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 09:55:54.601782  108095 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 09:55:54.601788  108095 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 09:55:54.601810  108095 httplog.go:90] GET /healthz: (232.855µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:54.683767  108095 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 09:55:54.683806  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:55:54.683819  108095 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 09:55:54.683831  108095 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 09:55:54.683840  108095 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 09:55:54.683876  108095 httplog.go:90] GET /healthz: (275.677µs) 0 [Go-http-client/1.1 127.0.0.1:38832]
I0919 09:55:54.701765  108095 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 09:55:54.701804  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:55:54.701820  108095 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 09:55:54.701830  108095 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 09:55:54.701839  108095 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 09:55:54.701882  108095 httplog.go:90] GET /healthz: (284.613µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:54.784143  108095 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 09:55:54.784175  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:55:54.784931  108095 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 09:55:54.784995  108095 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 09:55:54.785003  108095 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 09:55:54.785061  108095 httplog.go:90] GET /healthz: (1.082928ms) 0 [Go-http-client/1.1 127.0.0.1:38832]
I0919 09:55:54.801712  108095 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 09:55:54.801750  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:55:54.801763  108095 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 09:55:54.801772  108095 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 09:55:54.801779  108095 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 09:55:54.801820  108095 httplog.go:90] GET /healthz: (265.843µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:54.883807  108095 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 09:55:54.883840  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:55:54.883852  108095 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 09:55:54.883862  108095 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 09:55:54.883870  108095 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 09:55:54.883904  108095 httplog.go:90] GET /healthz: (248.768µs) 0 [Go-http-client/1.1 127.0.0.1:38832]
I0919 09:55:54.901742  108095 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 09:55:54.901784  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:55:54.901797  108095 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 09:55:54.901807  108095 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 09:55:54.901816  108095 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 09:55:54.901845  108095 httplog.go:90] GET /healthz: (266.14µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:54.983804  108095 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 09:55:54.983845  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:55:54.983858  108095 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 09:55:54.983867  108095 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 09:55:54.983876  108095 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 09:55:54.983906  108095 httplog.go:90] GET /healthz: (258.718µs) 0 [Go-http-client/1.1 127.0.0.1:38832]
I0919 09:55:54.998109  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:55:54.998208  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:55:55.003126  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:55:55.003152  108095 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 09:55:55.003159  108095 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 09:55:55.003165  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 09:55:55.003202  108095 httplog.go:90] GET /healthz: (1.647322ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:55.084814  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:55:55.084850  108095 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 09:55:55.084862  108095 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 09:55:55.084872  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 09:55:55.084925  108095 httplog.go:90] GET /healthz: (1.274485ms) 0 [Go-http-client/1.1 127.0.0.1:38832]
I0919 09:55:55.102868  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:55:55.102901  108095 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 09:55:55.102913  108095 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 09:55:55.102922  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 09:55:55.102999  108095 httplog.go:90] GET /healthz: (1.356012ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:55.184254  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.377936ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38828]
I0919 09:55:55.184269  108095 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.258107ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:55.186047  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:55:55.186072  108095 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 09:55:55.186081  108095 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 09:55:55.186088  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 09:55:55.186135  108095 httplog.go:90] GET /healthz: (1.963798ms) 0 [Go-http-client/1.1 127.0.0.1:39134]
I0919 09:55:55.186161  108095 httplog.go:90] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-node-critical: (2.023962ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39132]
I0919 09:55:55.186334  108095 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: (1.699149ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38828]
I0919 09:55:55.186404  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.767343ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:55.188383  108095 httplog.go:90] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (1.318213ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39134]
I0919 09:55:55.188553  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (1.698602ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38828]
I0919 09:55:55.188790  108095 httplog.go:90] POST /api/v1/namespaces/kube-system/configmaps: (1.99677ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:55.188826  108095 storage_scheduling.go:139] created PriorityClass system-node-critical with value 2000001000
I0919 09:55:55.191066  108095 httplog.go:90] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-cluster-critical: (2.126085ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39134]
I0919 09:55:55.191069  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (1.944341ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38828]
I0919 09:55:55.194169  108095 httplog.go:90] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (2.700859ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:55.194414  108095 storage_scheduling.go:139] created PriorityClass system-cluster-critical with value 2000000000
I0919 09:55:55.194439  108095 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
I0919 09:55:55.195830  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (4.369787ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39132]
I0919 09:55:55.198323  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (2.026433ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39132]
I0919 09:55:55.199638  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (912.31µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39132]
I0919 09:55:55.205869  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:55:55.205910  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:55:55.205965  108095 httplog.go:90] GET /healthz: (1.39024ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:55.206015  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (1.03622ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39132]
I0919 09:55:55.207524  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (1.077822ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:55.210253  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-admin: (1.13311ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:55.213166  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.419686ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:55.213763  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/cluster-admin
I0919 09:55:55.215445  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:discovery: (1.41882ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:55.218204  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.164794ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:55.218428  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:discovery
I0919 09:55:55.219710  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:basic-user: (912.212µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:55.222301  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.911871ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:55.222841  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:basic-user
I0919 09:55:55.224119  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:public-info-viewer: (1.090237ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:55.226152  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.639896ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:55.226618  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:public-info-viewer
I0919 09:55:55.228363  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (1.564745ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:55.231214  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.367185ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:55.231432  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/admin
I0919 09:55:55.233207  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (991.346µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:55.237056  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.312649ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:55.237487  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/edit
I0919 09:55:55.241051  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (2.763527ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:55.244800  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.303834ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:55.245535  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/view
I0919 09:55:55.249380  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (3.476738ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:55.252631  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.788346ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:55.253009  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin
I0919 09:55:55.255825  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (2.603315ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:55.259039  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.691726ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:55.259328  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit
I0919 09:55:55.260767  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (1.077095ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:55.267196  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.508395ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:55.267767  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view
I0919 09:55:55.269209  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:heapster: (1.157627ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:55.271488  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.86631ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:55.271767  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:heapster
I0919 09:55:55.273423  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node: (875.273µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:55.276771  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.447881ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:55.277096  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node
I0919 09:55:55.282272  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-problem-detector: (4.92394ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:55.285170  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.126661ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:55.285424  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector
I0919 09:55:55.285637  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:55:55.285662  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:55:55.285699  108095 httplog.go:90] GET /healthz: (2.15659ms) 0 [Go-http-client/1.1 127.0.0.1:39132]
I0919 09:55:55.286791  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kubelet-api-admin: (1.137838ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:55.289088  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.818443ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:55.289449  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kubelet-api-admin
I0919 09:55:55.290760  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-bootstrapper: (922.381µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:55.292959  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.637306ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:55.293246  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-bootstrapper
I0919 09:55:55.295344  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:auth-delegator: (1.710787ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:55.297438  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.717206ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:55.298332  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:auth-delegator
I0919 09:55:55.301042  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-aggregator: (2.253016ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:55.303124  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:55:55.303157  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:55:55.303198  108095 httplog.go:90] GET /healthz: (1.761728ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:55.304039  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.564278ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39132]
I0919 09:55:55.304320  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator
I0919 09:55:55.309077  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-controller-manager: (3.530983ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39132]
I0919 09:55:55.313111  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.311358ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39132]
I0919 09:55:55.313493  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager
I0919 09:55:55.314717  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-dns: (890.127µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39132]
I0919 09:55:55.317011  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.859583ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39132]
I0919 09:55:55.317388  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-dns
I0919 09:55:55.321441  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:persistent-volume-provisioner: (3.837773ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39132]
I0919 09:55:55.324005  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.091423ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39132]
I0919 09:55:55.324231  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner
I0919 09:55:55.325787  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-attacher: (1.204102ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39132]
I0919 09:55:55.328397  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.719616ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39132]
I0919 09:55:55.328828  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:csi-external-attacher
I0919 09:55:55.330233  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:nodeclient: (1.120054ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39132]
I0919 09:55:55.334380  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.178759ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39132]
I0919 09:55:55.334697  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:nodeclient
I0919 09:55:55.336303  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient: (1.102236ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39132]
I0919 09:55:55.338745  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.838313ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39132]
I0919 09:55:55.339026  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
I0919 09:55:55.340921  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:volume-scheduler: (1.668462ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39132]
I0919 09:55:55.343436  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.126111ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39132]
I0919 09:55:55.344444  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:volume-scheduler
I0919 09:55:55.349347  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-proxier: (4.503832ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39132]
I0919 09:55:55.356204  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (6.201333ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39132]
I0919 09:55:55.356475  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-proxier
I0919 09:55:55.358757  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-scheduler: (2.015891ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39132]
I0919 09:55:55.362517  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.949248ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39132]
I0919 09:55:55.363006  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler
I0919 09:55:55.365372  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-provisioner: (1.90748ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39132]
I0919 09:55:55.368178  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.002852ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39132]
I0919 09:55:55.368569  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:csi-external-provisioner
I0919 09:55:55.374304  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:attachdetach-controller: (5.524089ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39132]
I0919 09:55:55.377642  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.392093ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39132]
I0919 09:55:55.378116  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0919 09:55:55.384425  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:clusterrole-aggregation-controller: (5.816991ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39132]
I0919 09:55:55.385592  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:55:55.385625  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:55:55.385667  108095 httplog.go:90] GET /healthz: (1.703191ms) 0 [Go-http-client/1.1 127.0.0.1:38832]
I0919 09:55:55.388324  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.967905ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:55.389344  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0919 09:55:55.390764  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:cronjob-controller: (1.037661ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:55.393346  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.896972ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:55.393745  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0919 09:55:55.395236  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:daemon-set-controller: (1.29509ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:55.397733  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.875452ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:55.398186  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0919 09:55:55.399491  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:deployment-controller: (1.082593ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:55.402858  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.935979ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:55.403124  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller
I0919 09:55:55.404147  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:55:55.404187  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:55:55.404230  108095 httplog.go:90] GET /healthz: (1.284261ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39132]
I0919 09:55:55.405118  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:disruption-controller: (1.82009ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:55.407580  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.982345ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:55.407831  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller
I0919 09:55:55.410443  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:endpoint-controller: (2.132505ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:55.412982  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.991748ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:55.413233  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0919 09:55:55.414709  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:expand-controller: (1.202749ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:55.418687  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.377401ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:55.419718  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:expand-controller
I0919 09:55:55.421108  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:generic-garbage-collector: (1.104707ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:55.423746  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.143292ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:55.424030  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0919 09:55:55.426165  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:horizontal-pod-autoscaler: (1.036469ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:55.429299  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.596176ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:55.429553  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0919 09:55:55.431417  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:job-controller: (1.705246ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:55.433624  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.760147ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:55.434562  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller
I0919 09:55:55.435919  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:namespace-controller: (936.011µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:55.438200  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.718702ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:55.438475  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:namespace-controller
I0919 09:55:55.439525  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:node-controller: (870.444µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:55.441648  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.757694ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:55.442148  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:node-controller
I0919 09:55:55.443554  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:persistent-volume-binder: (1.031025ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:55.471279  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (26.795829ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:55.471875  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0919 09:55:55.473740  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pod-garbage-collector: (1.633264ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:55.477107  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.905488ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:55.477539  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0919 09:55:55.478692  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replicaset-controller: (952.167µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:55.481353  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.0802ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:55.481534  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0919 09:55:55.482997  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replication-controller: (1.343517ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:55.485081  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:55:55.485107  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:55:55.485141  108095 httplog.go:90] GET /healthz: (1.645071ms) 0 [Go-http-client/1.1 127.0.0.1:39132]
I0919 09:55:55.485767  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.392964ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:55.486044  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller
I0919 09:55:55.487381  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:resourcequota-controller: (1.113069ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:55.490162  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.384665ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:55.490397  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0919 09:55:55.492085  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:route-controller: (1.535946ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:55.494260  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.740783ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:55.494461  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:route-controller
I0919 09:55:55.495796  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-account-controller: (1.146231ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:55.497972  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.606968ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:55.498177  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-account-controller
I0919 09:55:55.499455  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-controller: (964.573µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:55.502382  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:55:55.502653  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:55:55.502799  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.861306ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:55.502892  108095 httplog.go:90] GET /healthz: (1.353754ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39132]
I0919 09:55:55.503103  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-controller
I0919 09:55:55.504581  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:statefulset-controller: (1.291517ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39132]
I0919 09:55:55.506904  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.854023ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39132]
I0919 09:55:55.507863  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0919 09:55:55.509564  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:ttl-controller: (1.514029ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39132]
I0919 09:55:55.516965  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (6.575942ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39132]
I0919 09:55:55.517495  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-controller
I0919 09:55:55.525909  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:certificate-controller: (7.955334ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39132]
I0919 09:55:55.541091  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (14.521402ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39132]
I0919 09:55:55.541555  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller
I0919 09:55:55.543643  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pvc-protection-controller: (1.487654ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39132]
I0919 09:55:55.546462  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.298574ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39132]
I0919 09:55:55.546789  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0919 09:55:55.548386  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pv-protection-controller: (1.407041ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39132]
I0919 09:55:55.551286  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.447992ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39132]
I0919 09:55:55.551498  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0919 09:55:55.560558  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin: (8.875283ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39132]
I0919 09:55:55.566435  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (4.559389ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39132]
I0919 09:55:55.566921  108095 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin
I0919 09:55:55.569276  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (2.108456ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39132]
I0919 09:55:55.572346  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.468429ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39132]
I0919 09:55:55.572612  108095 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery
I0919 09:55:55.594243  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:basic-user: (3.337039ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39132]
I0919 09:55:55.594583  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:55:55.594604  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:55:55.594664  108095 httplog.go:90] GET /healthz: (3.932539ms) 0 [Go-http-client/1.1 127.0.0.1:38832]
I0919 09:55:55.607012  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (4.078301ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:55.607340  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:55:55.607360  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:55:55.607395  108095 httplog.go:90] GET /healthz: (5.615841ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39132]
I0919 09:55:55.607773  108095 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user
I0919 09:55:55.672184  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:public-info-viewer: (10.689257ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:55.676417  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.026518ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:55.676651  108095 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:public-info-viewer
I0919 09:55:55.680908  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node-proxier: (3.759949ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:55.685487  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:55:55.685511  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:55:55.685553  108095 httplog.go:90] GET /healthz: (1.833819ms) 0 [Go-http-client/1.1 127.0.0.1:39132]
I0919 09:55:55.686151  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.282205ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:55.687157  108095 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier
I0919 09:55:55.712260  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:55:55.712297  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:55:55.712351  108095 httplog.go:90] GET /healthz: (10.770303ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:55.712870  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-controller-manager: (9.985993ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39132]
I0919 09:55:55.726342  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.363086ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39132]
I0919 09:55:55.726626  108095 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager
I0919 09:55:55.745379  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-dns: (2.280178ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39132]
I0919 09:55:55.768031  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (5.023285ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39132]
I0919 09:55:55.769481  108095 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns
I0919 09:55:55.785954  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:55:55.785991  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:55:55.786032  108095 httplog.go:90] GET /healthz: (1.022562ms) 0 [Go-http-client/1.1 127.0.0.1:39132]
I0919 09:55:55.787335  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-scheduler: (2.823348ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:55.803971  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:55:55.804004  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:55:55.804039  108095 httplog.go:90] GET /healthz: (2.482465ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:55.807408  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.878507ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39132]
I0919 09:55:55.807736  108095 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler
I0919 09:55:55.824194  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:volume-scheduler: (1.139695ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39132]
I0919 09:55:55.844874  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.846796ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39132]
I0919 09:55:55.845314  108095 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:volume-scheduler
I0919 09:55:55.864706  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node: (1.371491ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39132]
I0919 09:55:55.885380  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:55:55.885414  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:55:55.885422  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.392808ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39132]
I0919 09:55:55.885453  108095 httplog.go:90] GET /healthz: (1.956832ms) 0 [Go-http-client/1.1 127.0.0.1:38832]
I0919 09:55:55.885688  108095 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node
I0919 09:55:55.902752  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:55:55.902785  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:55:55.902834  108095 httplog.go:90] GET /healthz: (1.22205ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39132]
I0919 09:55:55.904229  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:attachdetach-controller: (1.329067ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:55.925411  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.321406ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:55.925642  108095 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0919 09:55:55.944968  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:clusterrole-aggregation-controller: (1.162427ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:55.965105  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.097078ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:55.965592  108095 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0919 09:55:55.984489  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:cronjob-controller: (1.384331ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:55.984971  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:55:55.985002  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:55:55.985036  108095 httplog.go:90] GET /healthz: (1.34737ms) 0 [Go-http-client/1.1 127.0.0.1:39132]
I0919 09:55:56.008065  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:55:56.008100  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:55:56.008172  108095 httplog.go:90] GET /healthz: (5.314095ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39132]
I0919 09:55:56.008576  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (5.629934ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:56.008866  108095 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0919 09:55:56.024133  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:daemon-set-controller: (1.11219ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:56.045607  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.532016ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:56.045836  108095 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0919 09:55:56.064213  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:deployment-controller: (1.217439ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:56.085583  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:55:56.085612  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:55:56.085652  108095 httplog.go:90] GET /healthz: (1.530906ms) 0 [Go-http-client/1.1 127.0.0.1:39132]
I0919 09:55:56.086236  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.234612ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:56.086555  108095 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I0919 09:55:56.105522  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:disruption-controller: (1.141259ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:56.105806  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:55:56.105828  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:55:56.105863  108095 httplog.go:90] GET /healthz: (1.806729ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39132]
I0919 09:55:56.125411  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.411789ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39132]
I0919 09:55:56.125681  108095 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I0919 09:55:56.145986  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:endpoint-controller: (2.872152ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39132]
I0919 09:55:56.166613  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.830077ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39132]
I0919 09:55:56.166890  108095 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0919 09:55:56.184690  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:55:56.184724  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:55:56.184740  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:expand-controller: (1.707447ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39132]
I0919 09:55:56.184761  108095 httplog.go:90] GET /healthz: (1.231992ms) 0 [Go-http-client/1.1 127.0.0.1:38832]
I0919 09:55:56.205842  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.813934ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39132]
I0919 09:55:56.206175  108095 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
I0919 09:55:56.206879  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:55:56.206908  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:55:56.206967  108095 httplog.go:90] GET /healthz: (1.900467ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:56.233732  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:generic-garbage-collector: (7.302682ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:56.249598  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (6.15072ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:56.250214  108095 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0919 09:55:56.266797  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:horizontal-pod-autoscaler: (3.761132ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:56.284474  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:55:56.284506  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:55:56.284557  108095 httplog.go:90] GET /healthz: (1.061393ms) 0 [Go-http-client/1.1 127.0.0.1:39132]
I0919 09:55:56.285708  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.376822ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:56.286277  108095 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0919 09:55:56.304397  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:55:56.304439  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:55:56.304479  108095 httplog.go:90] GET /healthz: (1.948913ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:56.304479  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:job-controller: (1.551457ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39132]
I0919 09:55:56.325401  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.319442ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39132]
I0919 09:55:56.326119  108095 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I0919 09:55:56.344867  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:namespace-controller: (1.827868ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39132]
I0919 09:55:56.365135  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.030052ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39132]
I0919 09:55:56.365415  108095 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I0919 09:55:56.384676  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:node-controller: (1.638795ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39132]
I0919 09:55:56.386035  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:55:56.386069  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:55:56.386119  108095 httplog.go:90] GET /healthz: (1.945109ms) 0 [Go-http-client/1.1 127.0.0.1:38832]
I0919 09:55:56.402506  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:55:56.402544  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:55:56.402602  108095 httplog.go:90] GET /healthz: (983.029µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:56.404999  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.06556ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:56.405710  108095 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I0919 09:55:56.428589  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:persistent-volume-binder: (5.252628ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:56.445890  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.952482ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:56.446120  108095 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0919 09:55:56.464699  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pod-garbage-collector: (1.638124ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:56.485039  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.012685ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:56.485335  108095 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0919 09:55:56.487416  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:55:56.487448  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:55:56.487513  108095 httplog.go:90] GET /healthz: (3.995017ms) 0 [Go-http-client/1.1 127.0.0.1:39132]
I0919 09:55:56.503014  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:55:56.503048  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:55:56.503089  108095 httplog.go:90] GET /healthz: (1.5179ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39132]
I0919 09:55:56.506566  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replicaset-controller: (3.648864ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:56.525569  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.518209ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:56.525834  108095 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0919 09:55:56.544182  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replication-controller: (1.184867ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:56.566250  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.077566ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:56.566498  108095 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I0919 09:55:56.585437  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:55:56.585474  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:55:56.585539  108095 httplog.go:90] GET /healthz: (1.951505ms) 0 [Go-http-client/1.1 127.0.0.1:39132]
I0919 09:55:56.585559  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:resourcequota-controller: (1.988291ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:56.603052  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:55:56.603088  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:55:56.603138  108095 httplog.go:90] GET /healthz: (1.521533ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:56.605910  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.332768ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:56.606380  108095 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0919 09:55:56.624482  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:route-controller: (1.336248ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:56.645199  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.156234ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:56.645515  108095 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I0919 09:55:56.665097  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-account-controller: (1.950362ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:56.685220  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:55:56.685254  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:55:56.685298  108095 httplog.go:90] GET /healthz: (1.133175ms) 0 [Go-http-client/1.1 127.0.0.1:39132]
I0919 09:55:56.685889  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.841454ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:56.686151  108095 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I0919 09:55:56.702757  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:55:56.702790  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:55:56.702841  108095 httplog.go:90] GET /healthz: (1.256157ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:56.704040  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-controller: (1.125752ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39132]
I0919 09:55:56.726394  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.298462ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39132]
I0919 09:55:56.726700  108095 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I0919 09:55:56.744389  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:statefulset-controller: (1.269308ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39132]
I0919 09:55:56.765656  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.573698ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39132]
I0919 09:55:56.766002  108095 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0919 09:55:56.784128  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:ttl-controller: (1.106335ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39132]
I0919 09:55:56.784650  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:55:56.784671  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:55:56.784700  108095 httplog.go:90] GET /healthz: (1.129504ms) 0 [Go-http-client/1.1 127.0.0.1:38832]
I0919 09:55:56.803039  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:55:56.803212  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:55:56.803373  108095 httplog.go:90] GET /healthz: (1.698909ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:56.805510  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.503831ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39132]
I0919 09:55:56.805784  108095 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I0919 09:55:56.824666  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:certificate-controller: (1.586889ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39132]
I0919 09:55:56.845530  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.415784ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39132]
I0919 09:55:56.845813  108095 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I0919 09:55:56.865166  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pvc-protection-controller: (2.250474ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39132]
I0919 09:55:56.886002  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.024991ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39132]
I0919 09:55:56.886327  108095 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0919 09:55:56.886453  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:55:56.886503  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:55:56.886559  108095 httplog.go:90] GET /healthz: (2.626633ms) 0 [Go-http-client/1.1 127.0.0.1:38832]
I0919 09:55:56.902682  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:55:56.902712  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:55:56.902748  108095 httplog.go:90] GET /healthz: (999.113µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:56.904468  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pv-protection-controller: (1.363967ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:56.925308  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.233116ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:56.925630  108095 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0919 09:55:56.944837  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/extension-apiserver-authentication-reader: (1.780177ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:56.947365  108095 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.861911ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:56.967108  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (3.536577ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:56.967379  108095 storage_rbac.go:278] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
I0919 09:55:56.984540  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:bootstrap-signer: (1.502586ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:56.984956  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:55:56.984991  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:55:56.985024  108095 httplog.go:90] GET /healthz: (1.351512ms) 0 [Go-http-client/1.1 127.0.0.1:39132]
I0919 09:55:56.987105  108095 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.088855ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:57.007425  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:55:57.007453  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:55:57.007496  108095 httplog.go:90] GET /healthz: (5.836843ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:57.009749  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.338113ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39132]
I0919 09:55:57.010258  108095 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0919 09:55:57.024613  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:cloud-provider: (1.521019ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39132]
I0919 09:55:57.026749  108095 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.622443ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39132]
I0919 09:55:57.045173  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.109448ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39132]
I0919 09:55:57.045638  108095 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0919 09:55:57.068720  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:token-cleaner: (1.34899ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39132]
I0919 09:55:57.072812  108095 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.561748ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39132]
I0919 09:55:57.102582  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:55:57.102619  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:55:57.102664  108095 httplog.go:90] GET /healthz: (18.413277ms) 0 [Go-http-client/1.1 127.0.0.1:38832]
I0919 09:55:57.102758  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (19.588498ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39132]
I0919 09:55:57.104242  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:55:57.104271  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:55:57.104312  108095 httplog.go:90] GET /healthz: (1.757892ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39566]
I0919 09:55:57.104559  108095 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0919 09:55:57.107676  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-controller-manager: (2.809141ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:57.110568  108095 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.359524ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:57.125716  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.663881ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:57.126013  108095 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0919 09:55:57.144604  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-scheduler: (1.552943ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:57.146593  108095 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.379303ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:57.166988  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (3.692338ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:57.167251  108095 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0919 09:55:57.185112  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:55:57.185150  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:55:57.185189  108095 httplog.go:90] GET /healthz: (1.482045ms) 0 [Go-http-client/1.1 127.0.0.1:39132]
I0919 09:55:57.185357  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles/system:controller:bootstrap-signer: (2.314505ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:57.187223  108095 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.407026ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:57.202616  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:55:57.202672  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:55:57.202736  108095 httplog.go:90] GET /healthz: (1.050088ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:57.205200  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles: (1.930236ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:57.205450  108095 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0919 09:55:57.224491  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::extension-apiserver-authentication-reader: (1.376253ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:57.227190  108095 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.768849ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:57.245252  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.224266ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:57.245671  108095 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::extension-apiserver-authentication-reader in kube-system
I0919 09:55:57.264620  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-controller-manager: (1.495897ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:57.267572  108095 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.15872ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:57.284354  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:55:57.284380  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:55:57.284425  108095 httplog.go:90] GET /healthz: (861.701µs) 0 [Go-http-client/1.1 127.0.0.1:39132]
I0919 09:55:57.285428  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.325066ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:57.285809  108095 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0919 09:55:57.303075  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:55:57.303106  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:55:57.303170  108095 httplog.go:90] GET /healthz: (1.635359ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:57.304833  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-scheduler: (1.171975ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:57.307776  108095 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.193706ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:57.325831  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.747388ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:57.326215  108095 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0919 09:55:57.345101  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:bootstrap-signer: (1.76142ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:57.348078  108095 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.991352ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:57.365879  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.738208ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:57.366165  108095 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0919 09:55:57.384839  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:55:57.384876  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:55:57.384919  108095 httplog.go:90] GET /healthz: (1.393037ms) 0 [Go-http-client/1.1 127.0.0.1:39132]
I0919 09:55:57.384964  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:cloud-provider: (1.847635ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:57.387071  108095 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.560672ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:57.406116  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (3.04538ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39132]
I0919 09:55:57.406388  108095 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0919 09:55:57.406523  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:55:57.406543  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:55:57.406580  108095 httplog.go:90] GET /healthz: (5.005031ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:57.424622  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:token-cleaner: (1.536015ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:57.426906  108095 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.755669ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:57.445569  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.568039ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:57.445870  108095 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0919 09:55:57.464454  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings/system:controller:bootstrap-signer: (1.393865ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:57.466749  108095 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.881457ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:57.485359  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:55:57.485393  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:55:57.485408  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings: (2.372497ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:57.485428  108095 httplog.go:90] GET /healthz: (1.637857ms) 0 [Go-http-client/1.1 127.0.0.1:39132]
I0919 09:55:57.485736  108095 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0919 09:55:57.503073  108095 httplog.go:90] GET /healthz: (1.424118ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:57.505266  108095 httplog.go:90] GET /api/v1/namespaces/default: (1.437833ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:57.507794  108095 httplog.go:90] POST /api/v1/namespaces: (1.954113ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:57.510235  108095 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.649008ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:57.520686  108095 httplog.go:90] POST /api/v1/namespaces/default/services: (9.829347ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:57.522764  108095 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.458051ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:57.524405  108095 httplog.go:90] POST /api/v1/namespaces/default/endpoints: (937.816µs) 422 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
E0919 09:55:57.524726  108095 controller.go:224] unable to sync kubernetes service: Endpoints "kubernetes" is invalid: [subsets[0].addresses[0].ip: Invalid value: "<nil>": must be a valid IP address, (e.g. 10.9.8.7), subsets[0].addresses[0].ip: Invalid value: "<nil>": must be a valid IP address]
I0919 09:55:57.584929  108095 httplog.go:90] GET /healthz: (1.277465ms) 200 [Go-http-client/1.1 127.0.0.1:38832]
I0919 09:55:57.588698  108095 httplog.go:90] POST /api/v1/namespaces/kube-system/configmaps: (2.495159ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
W0919 09:55:57.589043  108095 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 09:55:57.589212  108095 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 09:55:57.589302  108095 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 09:55:57.589421  108095 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 09:55:57.589521  108095 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 09:55:57.589606  108095 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 09:55:57.589681  108095 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 09:55:57.589789  108095 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 09:55:57.589877  108095 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 09:55:57.590040  108095 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 09:55:57.590191  108095 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0919 09:55:57.592195  108095 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps/scheduler-custom-policy-config-0: (1.668813ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:57.593791  108095 factory.go:304] Creating scheduler from configuration: {{ } [{PredicateOne <nil>} {PredicateTwo <nil>}] [{PriorityOne 1 <nil>} {PriorityTwo 5 <nil>}] [] 0 false}
I0919 09:55:57.593843  108095 factory.go:321] Registering predicate: PredicateOne
I0919 09:55:57.593854  108095 plugins.go:288] Predicate type PredicateOne already registered, reusing.
I0919 09:55:57.593861  108095 factory.go:321] Registering predicate: PredicateTwo
I0919 09:55:57.593865  108095 plugins.go:288] Predicate type PredicateTwo already registered, reusing.
I0919 09:55:57.593871  108095 factory.go:336] Registering priority: PriorityOne
I0919 09:55:57.593880  108095 plugins.go:399] Priority type PriorityOne already registered, reusing.
I0919 09:55:57.593893  108095 factory.go:336] Registering priority: PriorityTwo
I0919 09:55:57.593899  108095 plugins.go:399] Priority type PriorityTwo already registered, reusing.
I0919 09:55:57.593906  108095 factory.go:382] Creating scheduler with fit predicates 'map[PredicateOne:{} PredicateTwo:{}]' and priority functions 'map[PriorityOne:{} PriorityTwo:{}]'
I0919 09:55:57.596774  108095 httplog.go:90] POST /api/v1/namespaces/kube-system/configmaps: (2.339386ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
W0919 09:55:57.597089  108095 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0919 09:55:57.599205  108095 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps/scheduler-custom-policy-config-1: (1.468804ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:57.599510  108095 factory.go:304] Creating scheduler from configuration: {{ } [] [] [] 0 false}
I0919 09:55:57.599538  108095 factory.go:313] Using predicates from algorithm provider 'DefaultProvider'
I0919 09:55:57.599550  108095 factory.go:328] Using priorities from algorithm provider 'DefaultProvider'
I0919 09:55:57.599555  108095 factory.go:382] Creating scheduler with fit predicates 'map[CheckNodeUnschedulable:{} CheckVolumeBinding:{} GeneralPredicates:{} MatchInterPodAffinity:{} MaxAzureDiskVolumeCount:{} MaxCSIVolumeCountPred:{} MaxEBSVolumeCount:{} MaxGCEPDVolumeCount:{} NoDiskConflict:{} NoVolumeZoneConflict:{} PodToleratesNodeTaints:{}]' and priority functions 'map[BalancedResourceAllocation:{} ImageLocalityPriority:{} InterPodAffinityPriority:{} LeastRequestedPriority:{} NodeAffinityPriority:{} NodePreferAvoidPodsPriority:{} SelectorSpreadPriority:{} TaintTolerationPriority:{}]'
I0919 09:55:57.602107  108095 httplog.go:90] POST /api/v1/namespaces/kube-system/configmaps: (1.808782ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
W0919 09:55:57.602521  108095 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0919 09:55:57.604027  108095 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps/scheduler-custom-policy-config-2: (1.124447ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:57.604350  108095 factory.go:304] Creating scheduler from configuration: {{ } [] [] [] 0 false}
I0919 09:55:57.604381  108095 factory.go:382] Creating scheduler with fit predicates 'map[]' and priority functions 'map[]'
I0919 09:55:57.606575  108095 httplog.go:90] POST /api/v1/namespaces/kube-system/configmaps: (1.77505ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
W0919 09:55:57.607001  108095 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0919 09:55:57.608740  108095 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps/scheduler-custom-policy-config-3: (1.305256ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:57.609307  108095 factory.go:304] Creating scheduler from configuration: {{ } [{PredicateOne <nil>} {PredicateTwo <nil>}] [{PriorityOne 1 <nil>} {PriorityTwo 5 <nil>}] [] 0 false}
I0919 09:55:57.609349  108095 factory.go:321] Registering predicate: PredicateOne
I0919 09:55:57.609359  108095 plugins.go:288] Predicate type PredicateOne already registered, reusing.
I0919 09:55:57.609367  108095 factory.go:321] Registering predicate: PredicateTwo
I0919 09:55:57.609373  108095 plugins.go:288] Predicate type PredicateTwo already registered, reusing.
I0919 09:55:57.609379  108095 factory.go:336] Registering priority: PriorityOne
I0919 09:55:57.609387  108095 plugins.go:399] Priority type PriorityOne already registered, reusing.
I0919 09:55:57.609398  108095 factory.go:336] Registering priority: PriorityTwo
I0919 09:55:57.609404  108095 plugins.go:399] Priority type PriorityTwo already registered, reusing.
I0919 09:55:57.609412  108095 factory.go:382] Creating scheduler with fit predicates 'map[PredicateOne:{} PredicateTwo:{}]' and priority functions 'map[PriorityOne:{} PriorityTwo:{}]'
I0919 09:55:57.613825  108095 httplog.go:90] POST /api/v1/namespaces/kube-system/configmaps: (3.930357ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
W0919 09:55:57.614275  108095 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0919 09:55:57.616513  108095 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps/scheduler-custom-policy-config-4: (1.678832ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:57.616887  108095 factory.go:304] Creating scheduler from configuration: {{ } [] [] [] 0 false}
I0919 09:55:57.616926  108095 factory.go:313] Using predicates from algorithm provider 'DefaultProvider'
I0919 09:55:57.617030  108095 factory.go:328] Using priorities from algorithm provider 'DefaultProvider'
I0919 09:55:57.617050  108095 factory.go:382] Creating scheduler with fit predicates 'map[CheckNodeUnschedulable:{} CheckVolumeBinding:{} GeneralPredicates:{} MatchInterPodAffinity:{} MaxAzureDiskVolumeCount:{} MaxCSIVolumeCountPred:{} MaxEBSVolumeCount:{} MaxGCEPDVolumeCount:{} NoDiskConflict:{} NoVolumeZoneConflict:{} PodToleratesNodeTaints:{}]' and priority functions 'map[BalancedResourceAllocation:{} ImageLocalityPriority:{} InterPodAffinityPriority:{} LeastRequestedPriority:{} NodeAffinityPriority:{} NodePreferAvoidPodsPriority:{} SelectorSpreadPriority:{} TaintTolerationPriority:{}]'
I0919 09:55:57.786151  108095 request.go:538] Throttling request took 168.569695ms, request: POST:http://127.0.0.1:33407/api/v1/namespaces/kube-system/configmaps
I0919 09:55:57.789472  108095 httplog.go:90] POST /api/v1/namespaces/kube-system/configmaps: (2.974741ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
W0919 09:55:57.789834  108095 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0919 09:55:57.986151  108095 request.go:538] Throttling request took 196.019195ms, request: GET:http://127.0.0.1:33407/api/v1/namespaces/kube-system/configmaps/scheduler-custom-policy-config-5
I0919 09:55:57.988341  108095 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps/scheduler-custom-policy-config-5: (1.829925ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:57.988921  108095 factory.go:304] Creating scheduler from configuration: {{ } [] [] [] 0 false}
I0919 09:55:57.988978  108095 factory.go:382] Creating scheduler with fit predicates 'map[]' and priority functions 'map[]'
I0919 09:55:58.186147  108095 request.go:538] Throttling request took 196.803952ms, request: DELETE:http://127.0.0.1:33407/api/v1/nodes
I0919 09:55:58.188545  108095 httplog.go:90] DELETE /api/v1/nodes: (2.064613ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
I0919 09:55:58.188888  108095 controller.go:182] Shutting down kubernetes service endpoint reconciler
I0919 09:55:58.190606  108095 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.352404ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38832]
--- FAIL: TestSchedulerCreationFromConfigMap (4.19s)
    scheduler_test.go:289: Expected predicates map[CheckNodeCondition:{} PredicateOne:{} PredicateTwo:{}], got map[CheckNodeUnschedulable:{} PodToleratesNodeTaints:{} PredicateOne:{} PredicateTwo:{}]
    scheduler_test.go:289: Expected predicates map[CheckNodeCondition:{} CheckNodeDiskPressure:{} CheckNodeMemoryPressure:{} CheckNodePIDPressure:{} CheckVolumeBinding:{} GeneralPredicates:{} MatchInterPodAffinity:{} MaxAzureDiskVolumeCount:{} MaxCSIVolumeCountPred:{} MaxEBSVolumeCount:{} MaxGCEPDVolumeCount:{} NoDiskConflict:{} NoVolumeZoneConflict:{} PodToleratesNodeTaints:{}], got map[CheckNodeUnschedulable:{} CheckVolumeBinding:{} GeneralPredicates:{} MatchInterPodAffinity:{} MaxAzureDiskVolumeCount:{} MaxCSIVolumeCountPred:{} MaxEBSVolumeCount:{} MaxGCEPDVolumeCount:{} NoDiskConflict:{} NoVolumeZoneConflict:{} PodToleratesNodeTaints:{}]
    scheduler_test.go:289: Expected predicates map[CheckNodeCondition:{}], got map[CheckNodeUnschedulable:{} PodToleratesNodeTaints:{}]
    scheduler_test.go:289: Expected predicates map[CheckNodeCondition:{} PredicateOne:{} PredicateTwo:{}], got map[CheckNodeUnschedulable:{} PodToleratesNodeTaints:{} PredicateOne:{} PredicateTwo:{}]
    scheduler_test.go:289: Expected predicates map[CheckNodeCondition:{} CheckNodeDiskPressure:{} CheckNodeMemoryPressure:{} CheckNodePIDPressure:{} CheckVolumeBinding:{} GeneralPredicates:{} MatchInterPodAffinity:{} MaxAzureDiskVolumeCount:{} MaxCSIVolumeCountPred:{} MaxEBSVolumeCount:{} MaxGCEPDVolumeCount:{} NoDiskConflict:{} NoVolumeZoneConflict:{} PodToleratesNodeTaints:{}], got map[CheckNodeUnschedulable:{} CheckVolumeBinding:{} GeneralPredicates:{} MatchInterPodAffinity:{} MaxAzureDiskVolumeCount:{} MaxCSIVolumeCountPred:{} MaxEBSVolumeCount:{} MaxGCEPDVolumeCount:{} NoDiskConflict:{} NoVolumeZoneConflict:{} PodToleratesNodeTaints:{}]
    scheduler_test.go:289: Expected predicates map[CheckNodeCondition:{}], got map[CheckNodeUnschedulable:{} PodToleratesNodeTaints:{}]

				from junit_d965d8661547eb73cabe6d94d5550ec333e4c0fa_20190919-094603.xml



k8s.io/kubernetes/test/integration/scheduler TestTaintBasedEvictions 2m20s

go test -v k8s.io/kubernetes/test/integration/scheduler -run TestTaintBasedEvictions$
=== RUN   TestTaintBasedEvictions
I0919 09:56:49.321725  108095 feature_gate.go:216] feature gates: &{map[EvenPodsSpread:false TaintBasedEvictions:true]}
--- FAIL: TestTaintBasedEvictions (140.30s)

				from junit_d965d8661547eb73cabe6d94d5550ec333e4c0fa_20190919-094603.xml



k8s.io/kubernetes/test/integration/scheduler TestTaintBasedEvictions/Taint_based_evictions_for_NodeNotReady_and_0_tolerationseconds 35s

go test -v k8s.io/kubernetes/test/integration/scheduler -run TestTaintBasedEvictions/Taint_based_evictions_for_NodeNotReady_and_0_tolerationseconds$
=== RUN   TestTaintBasedEvictions/Taint_based_evictions_for_NodeNotReady_and_0_tolerationseconds
W0919 09:57:59.541672  108095 services.go:35] No CIDR for service cluster IPs specified. Default value which was 10.0.0.0/24 is deprecated and will be removed in future releases. Please specify it using --service-cluster-ip-range on kube-apiserver.
I0919 09:57:59.541823  108095 services.go:47] Setting service IP to "10.0.0.1" (read-write).
I0919 09:57:59.541870  108095 master.go:303] Node port range unspecified. Defaulting to 30000-32767.
I0919 09:57:59.541912  108095 master.go:259] Using reconciler: 
I0919 09:57:59.543553  108095 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.543955  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:57:59.544133  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:57:59.545152  108095 store.go:1342] Monitoring podtemplates count at <storage-prefix>//podtemplates
I0919 09:57:59.545195  108095 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.545254  108095 reflector.go:153] Listing and watching *core.PodTemplate from storage/cacher.go:/podtemplates
I0919 09:57:59.545478  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:57:59.545503  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:57:59.546729  108095 watch_cache.go:405] Replace watchCache (rev: 59807) 
I0919 09:57:59.546776  108095 store.go:1342] Monitoring events count at <storage-prefix>//events
I0919 09:57:59.546851  108095 reflector.go:153] Listing and watching *core.Event from storage/cacher.go:/events
I0919 09:57:59.546827  108095 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.547456  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:57:59.547604  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:57:59.548087  108095 watch_cache.go:405] Replace watchCache (rev: 59807) 
I0919 09:57:59.548528  108095 store.go:1342] Monitoring limitranges count at <storage-prefix>//limitranges
I0919 09:57:59.548570  108095 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.548739  108095 reflector.go:153] Listing and watching *core.LimitRange from storage/cacher.go:/limitranges
I0919 09:57:59.548798  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:57:59.548815  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:57:59.549409  108095 store.go:1342] Monitoring resourcequotas count at <storage-prefix>//resourcequotas
I0919 09:57:59.549511  108095 reflector.go:153] Listing and watching *core.ResourceQuota from storage/cacher.go:/resourcequotas
I0919 09:57:59.549575  108095 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.549685  108095 watch_cache.go:405] Replace watchCache (rev: 59807) 
I0919 09:57:59.549787  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:57:59.549804  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:57:59.550360  108095 watch_cache.go:405] Replace watchCache (rev: 59807) 
I0919 09:57:59.550430  108095 store.go:1342] Monitoring secrets count at <storage-prefix>//secrets
I0919 09:57:59.550481  108095 reflector.go:153] Listing and watching *core.Secret from storage/cacher.go:/secrets
I0919 09:57:59.550586  108095 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.550787  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:57:59.550838  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:57:59.551486  108095 watch_cache.go:405] Replace watchCache (rev: 59807) 
I0919 09:57:59.551588  108095 store.go:1342] Monitoring persistentvolumes count at <storage-prefix>//persistentvolumes
I0919 09:57:59.551663  108095 reflector.go:153] Listing and watching *core.PersistentVolume from storage/cacher.go:/persistentvolumes
I0919 09:57:59.551788  108095 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.552014  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:57:59.552049  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:57:59.552591  108095 store.go:1342] Monitoring persistentvolumeclaims count at <storage-prefix>//persistentvolumeclaims
I0919 09:57:59.552594  108095 watch_cache.go:405] Replace watchCache (rev: 59807) 
I0919 09:57:59.552618  108095 reflector.go:153] Listing and watching *core.PersistentVolumeClaim from storage/cacher.go:/persistentvolumeclaims
I0919 09:57:59.552768  108095 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.553450  108095 watch_cache.go:405] Replace watchCache (rev: 59807) 
I0919 09:57:59.553498  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:57:59.553516  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:57:59.554138  108095 store.go:1342] Monitoring configmaps count at <storage-prefix>//configmaps
I0919 09:57:59.554187  108095 reflector.go:153] Listing and watching *core.ConfigMap from storage/cacher.go:/configmaps
I0919 09:57:59.554291  108095 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.554469  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:57:59.554489  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:57:59.554988  108095 watch_cache.go:405] Replace watchCache (rev: 59807) 
I0919 09:57:59.555011  108095 reflector.go:153] Listing and watching *core.Namespace from storage/cacher.go:/namespaces
I0919 09:57:59.554990  108095 store.go:1342] Monitoring namespaces count at <storage-prefix>//namespaces
I0919 09:57:59.555166  108095 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.555432  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:57:59.555462  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:57:59.555657  108095 watch_cache.go:405] Replace watchCache (rev: 59807) 
I0919 09:57:59.556036  108095 store.go:1342] Monitoring endpoints count at <storage-prefix>//services/endpoints
I0919 09:57:59.556109  108095 reflector.go:153] Listing and watching *core.Endpoints from storage/cacher.go:/services/endpoints
I0919 09:57:59.556213  108095 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.556413  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:57:59.556445  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:57:59.556755  108095 watch_cache.go:405] Replace watchCache (rev: 59807) 
I0919 09:57:59.556977  108095 store.go:1342] Monitoring nodes count at <storage-prefix>//minions
I0919 09:57:59.557132  108095 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.557221  108095 reflector.go:153] Listing and watching *core.Node from storage/cacher.go:/minions
I0919 09:57:59.557332  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:57:59.557357  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:57:59.558340  108095 store.go:1342] Monitoring pods count at <storage-prefix>//pods
I0919 09:57:59.558348  108095 watch_cache.go:405] Replace watchCache (rev: 59807) 
I0919 09:57:59.558373  108095 reflector.go:153] Listing and watching *core.Pod from storage/cacher.go:/pods
I0919 09:57:59.558488  108095 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.558700  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:57:59.558731  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:57:59.559349  108095 watch_cache.go:405] Replace watchCache (rev: 59807) 
I0919 09:57:59.559549  108095 store.go:1342] Monitoring serviceaccounts count at <storage-prefix>//serviceaccounts
I0919 09:57:59.559582  108095 reflector.go:153] Listing and watching *core.ServiceAccount from storage/cacher.go:/serviceaccounts
I0919 09:57:59.559688  108095 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.559894  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:57:59.559924  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:57:59.560381  108095 watch_cache.go:405] Replace watchCache (rev: 59807) 
I0919 09:57:59.560466  108095 store.go:1342] Monitoring services count at <storage-prefix>//services/specs
I0919 09:57:59.560495  108095 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.560668  108095 reflector.go:153] Listing and watching *core.Service from storage/cacher.go:/services/specs
I0919 09:57:59.560741  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:57:59.560766  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:57:59.562169  108095 watch_cache.go:405] Replace watchCache (rev: 59807) 
I0919 09:57:59.582662  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:57:59.582715  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:57:59.583652  108095 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.583860  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:57:59.583880  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:57:59.584633  108095 store.go:1342] Monitoring replicationcontrollers count at <storage-prefix>//controllers
I0919 09:57:59.584660  108095 rest.go:115] the default service ipfamily for this cluster is: IPv4
I0919 09:57:59.584713  108095 reflector.go:153] Listing and watching *core.ReplicationController from storage/cacher.go:/controllers
I0919 09:57:59.585081  108095 storage_factory.go:285] storing bindings in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.585221  108095 storage_factory.go:285] storing componentstatuses in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.585803  108095 watch_cache.go:405] Replace watchCache (rev: 59807) 
I0919 09:57:59.585830  108095 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.586508  108095 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.587041  108095 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.587512  108095 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.587830  108095 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.587919  108095 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.588069  108095 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.588375  108095 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.588791  108095 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.588958  108095 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.589495  108095 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.589706  108095 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.590122  108095 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.590377  108095 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.590845  108095 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.590987  108095 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.591083  108095 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.591156  108095 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.591259  108095 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.591349  108095 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.591474  108095 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.592019  108095 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.592249  108095 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.592715  108095 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.593271  108095 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.593460  108095 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.593663  108095 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.594266  108095 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.594450  108095 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.594953  108095 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.595547  108095 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.596009  108095 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.596484  108095 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.596676  108095 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.596780  108095 master.go:450] Skipping disabled API group "auditregistration.k8s.io".
I0919 09:57:59.596800  108095 master.go:461] Enabling API group "authentication.k8s.io".
I0919 09:57:59.596811  108095 master.go:461] Enabling API group "authorization.k8s.io".
I0919 09:57:59.597006  108095 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.597220  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:57:59.597288  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:57:59.598070  108095 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0919 09:57:59.598144  108095 reflector.go:153] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0919 09:57:59.598222  108095 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.598428  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:57:59.598449  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:57:59.599049  108095 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0919 09:57:59.599082  108095 reflector.go:153] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0919 09:57:59.599110  108095 watch_cache.go:405] Replace watchCache (rev: 59807) 
I0919 09:57:59.599212  108095 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.599391  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:57:59.599412  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:57:59.599727  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:57:59.599794  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:57:59.599821  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:57:59.599829  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:57:59.599853  108095 watch_cache.go:405] Replace watchCache (rev: 59807) 
I0919 09:57:59.599871  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:57:59.599894  108095 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0919 09:57:59.599910  108095 master.go:461] Enabling API group "autoscaling".
I0919 09:57:59.599976  108095 reflector.go:153] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0919 09:57:59.600086  108095 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.600280  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:57:59.600303  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:57:59.600985  108095 store.go:1342] Monitoring jobs.batch count at <storage-prefix>//jobs
I0919 09:57:59.601070  108095 reflector.go:153] Listing and watching *batch.Job from storage/cacher.go:/jobs
I0919 09:57:59.601151  108095 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.601350  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:57:59.601372  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:57:59.601868  108095 watch_cache.go:405] Replace watchCache (rev: 59807) 
I0919 09:57:59.601903  108095 store.go:1342] Monitoring cronjobs.batch count at <storage-prefix>//cronjobs
I0919 09:57:59.601920  108095 master.go:461] Enabling API group "batch".
I0919 09:57:59.601923  108095 watch_cache.go:405] Replace watchCache (rev: 59807) 
I0919 09:57:59.602009  108095 reflector.go:153] Listing and watching *batch.CronJob from storage/cacher.go:/cronjobs
I0919 09:57:59.602015  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:57:59.602046  108095 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.602296  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:57:59.602317  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:57:59.603274  108095 watch_cache.go:405] Replace watchCache (rev: 59807) 
I0919 09:57:59.603743  108095 store.go:1342] Monitoring certificatesigningrequests.certificates.k8s.io count at <storage-prefix>//certificatesigningrequests
I0919 09:57:59.603957  108095 reflector.go:153] Listing and watching *certificates.CertificateSigningRequest from storage/cacher.go:/certificatesigningrequests
I0919 09:57:59.604081  108095 master.go:461] Enabling API group "certificates.k8s.io".
I0919 09:57:59.604363  108095 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.604557  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:57:59.604665  108095 watch_cache.go:405] Replace watchCache (rev: 59807) 
I0919 09:57:59.605210  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:57:59.605355  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:57:59.606110  108095 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0919 09:57:59.606150  108095 reflector.go:153] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0919 09:57:59.606438  108095 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.606898  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:57:59.607036  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:57:59.607281  108095 watch_cache.go:405] Replace watchCache (rev: 59807) 
I0919 09:57:59.607879  108095 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0919 09:57:59.608023  108095 master.go:461] Enabling API group "coordination.k8s.io".
I0919 09:57:59.608107  108095 master.go:450] Skipping disabled API group "discovery.k8s.io".
I0919 09:57:59.607924  108095 reflector.go:153] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0919 09:57:59.608364  108095 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.608700  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:57:59.608814  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:57:59.609095  108095 watch_cache.go:405] Replace watchCache (rev: 59807) 
I0919 09:57:59.609635  108095 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0919 09:57:59.609657  108095 master.go:461] Enabling API group "extensions".
I0919 09:57:59.609786  108095 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.609922  108095 reflector.go:153] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0919 09:57:59.609971  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:57:59.609990  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:57:59.610719  108095 store.go:1342] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I0919 09:57:59.610869  108095 watch_cache.go:405] Replace watchCache (rev: 59807) 
I0919 09:57:59.610961  108095 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.611032  108095 reflector.go:153] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I0919 09:57:59.611272  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:57:59.611290  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:57:59.611764  108095 watch_cache.go:405] Replace watchCache (rev: 59807) 
I0919 09:57:59.612392  108095 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0919 09:57:59.612497  108095 master.go:461] Enabling API group "networking.k8s.io".
I0919 09:57:59.612527  108095 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.612461  108095 reflector.go:153] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0919 09:57:59.612994  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:57:59.613030  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:57:59.613227  108095 watch_cache.go:405] Replace watchCache (rev: 59807) 
I0919 09:57:59.613555  108095 store.go:1342] Monitoring runtimeclasses.node.k8s.io count at <storage-prefix>//runtimeclasses
I0919 09:57:59.613584  108095 master.go:461] Enabling API group "node.k8s.io".
I0919 09:57:59.613646  108095 reflector.go:153] Listing and watching *node.RuntimeClass from storage/cacher.go:/runtimeclasses
I0919 09:57:59.614016  108095 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.614297  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:57:59.614407  108095 watch_cache.go:405] Replace watchCache (rev: 59807) 
I0919 09:57:59.614410  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:57:59.615014  108095 store.go:1342] Monitoring poddisruptionbudgets.policy count at <storage-prefix>//poddisruptionbudgets
I0919 09:57:59.615136  108095 reflector.go:153] Listing and watching *policy.PodDisruptionBudget from storage/cacher.go:/poddisruptionbudgets
I0919 09:57:59.615156  108095 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.615544  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:57:59.615580  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:57:59.615796  108095 watch_cache.go:405] Replace watchCache (rev: 59807) 
I0919 09:57:59.616446  108095 store.go:1342] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicy
I0919 09:57:59.616472  108095 master.go:461] Enabling API group "policy".
I0919 09:57:59.616518  108095 reflector.go:153] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicy
I0919 09:57:59.616507  108095 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.616673  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:57:59.616693  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:57:59.617389  108095 watch_cache.go:405] Replace watchCache (rev: 59807) 
I0919 09:57:59.617660  108095 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0919 09:57:59.617739  108095 reflector.go:153] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0919 09:57:59.617882  108095 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.618110  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:57:59.618143  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:57:59.618601  108095 watch_cache.go:405] Replace watchCache (rev: 59807) 
I0919 09:57:59.618851  108095 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0919 09:57:59.619001  108095 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.618920  108095 reflector.go:153] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0919 09:57:59.619446  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:57:59.619553  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:57:59.619966  108095 watch_cache.go:405] Replace watchCache (rev: 59807) 
I0919 09:57:59.620861  108095 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0919 09:57:59.620922  108095 reflector.go:153] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0919 09:57:59.621008  108095 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.621193  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:57:59.621214  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:57:59.621644  108095 watch_cache.go:405] Replace watchCache (rev: 59807) 
I0919 09:57:59.621795  108095 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0919 09:57:59.621854  108095 reflector.go:153] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0919 09:57:59.621991  108095 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.622218  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:57:59.622240  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:57:59.622545  108095 watch_cache.go:405] Replace watchCache (rev: 59807) 
I0919 09:57:59.623206  108095 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0919 09:57:59.623238  108095 reflector.go:153] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0919 09:57:59.623335  108095 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.623496  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:57:59.623526  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:57:59.624018  108095 watch_cache.go:405] Replace watchCache (rev: 59807) 
I0919 09:57:59.624535  108095 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0919 09:57:59.624588  108095 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.624647  108095 reflector.go:153] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0919 09:57:59.624746  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:57:59.624772  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:57:59.625335  108095 watch_cache.go:405] Replace watchCache (rev: 59807) 
I0919 09:57:59.625601  108095 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0919 09:57:59.625648  108095 reflector.go:153] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0919 09:57:59.625725  108095 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.625906  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:57:59.625921  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:57:59.626423  108095 watch_cache.go:405] Replace watchCache (rev: 59807) 
I0919 09:57:59.626524  108095 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0919 09:57:59.626552  108095 master.go:461] Enabling API group "rbac.authorization.k8s.io".
I0919 09:57:59.626573  108095 reflector.go:153] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0919 09:57:59.627566  108095 watch_cache.go:405] Replace watchCache (rev: 59807) 
I0919 09:57:59.628381  108095 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.628538  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:57:59.628559  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:57:59.629210  108095 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0919 09:57:59.629244  108095 reflector.go:153] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0919 09:57:59.629332  108095 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.629503  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:57:59.629516  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:57:59.630275  108095 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0919 09:57:59.630376  108095 master.go:461] Enabling API group "scheduling.k8s.io".
I0919 09:57:59.630305  108095 reflector.go:153] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0919 09:57:59.630508  108095 watch_cache.go:405] Replace watchCache (rev: 59807) 
I0919 09:57:59.630750  108095 master.go:450] Skipping disabled API group "settings.k8s.io".
I0919 09:57:59.631016  108095 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.631153  108095 watch_cache.go:405] Replace watchCache (rev: 59807) 
I0919 09:57:59.631451  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:57:59.631578  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:57:59.632264  108095 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0919 09:57:59.632405  108095 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.632600  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:57:59.632629  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:57:59.632712  108095 reflector.go:153] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0919 09:57:59.633453  108095 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0919 09:57:59.633492  108095 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.633530  108095 reflector.go:153] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0919 09:57:59.633634  108095 watch_cache.go:405] Replace watchCache (rev: 59807) 
I0919 09:57:59.633679  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:57:59.633699  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:57:59.634292  108095 store.go:1342] Monitoring csinodes.storage.k8s.io count at <storage-prefix>//csinodes
I0919 09:57:59.634345  108095 reflector.go:153] Listing and watching *storage.CSINode from storage/cacher.go:/csinodes
I0919 09:57:59.634342  108095 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.634531  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:57:59.634547  108095 watch_cache.go:405] Replace watchCache (rev: 59807) 
I0919 09:57:59.634571  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:57:59.635066  108095 watch_cache.go:405] Replace watchCache (rev: 59807) 
I0919 09:57:59.635758  108095 store.go:1342] Monitoring csidrivers.storage.k8s.io count at <storage-prefix>//csidrivers
I0919 09:57:59.635810  108095 reflector.go:153] Listing and watching *storage.CSIDriver from storage/cacher.go:/csidrivers
I0919 09:57:59.635871  108095 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.636061  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:57:59.636083  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:57:59.636603  108095 watch_cache.go:405] Replace watchCache (rev: 59807) 
I0919 09:57:59.636672  108095 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0919 09:57:59.636756  108095 reflector.go:153] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0919 09:57:59.636834  108095 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.637080  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:57:59.637103  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:57:59.637674  108095 watch_cache.go:405] Replace watchCache (rev: 59807) 
I0919 09:57:59.637779  108095 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0919 09:57:59.637795  108095 master.go:461] Enabling API group "storage.k8s.io".
I0919 09:57:59.637812  108095 reflector.go:153] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0919 09:57:59.637978  108095 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.638204  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:57:59.638235  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:57:59.638597  108095 watch_cache.go:405] Replace watchCache (rev: 59807) 
I0919 09:57:59.638821  108095 store.go:1342] Monitoring deployments.apps count at <storage-prefix>//deployments
I0919 09:57:59.638842  108095 reflector.go:153] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0919 09:57:59.638995  108095 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.639171  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:57:59.639190  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:57:59.639420  108095 watch_cache.go:405] Replace watchCache (rev: 59807) 
I0919 09:57:59.639974  108095 store.go:1342] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0919 09:57:59.640020  108095 reflector.go:153] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0919 09:57:59.640151  108095 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.640370  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:57:59.640403  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:57:59.640671  108095 watch_cache.go:405] Replace watchCache (rev: 59807) 
I0919 09:57:59.641030  108095 store.go:1342] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0919 09:57:59.641208  108095 reflector.go:153] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0919 09:57:59.641228  108095 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.641425  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:57:59.641452  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:57:59.641844  108095 watch_cache.go:405] Replace watchCache (rev: 59807) 
I0919 09:57:59.642164  108095 store.go:1342] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0919 09:57:59.642190  108095 reflector.go:153] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0919 09:57:59.642308  108095 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.642454  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:57:59.642472  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:57:59.642931  108095 watch_cache.go:405] Replace watchCache (rev: 59807) 
I0919 09:57:59.643096  108095 store.go:1342] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0919 09:57:59.643116  108095 master.go:461] Enabling API group "apps".
I0919 09:57:59.643139  108095 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.643201  108095 reflector.go:153] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0919 09:57:59.643311  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:57:59.643328  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:57:59.643826  108095 watch_cache.go:405] Replace watchCache (rev: 59807) 
I0919 09:57:59.643865  108095 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0919 09:57:59.643897  108095 reflector.go:153] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0919 09:57:59.643899  108095 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.644109  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:57:59.644129  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:57:59.644629  108095 watch_cache.go:405] Replace watchCache (rev: 59807) 
I0919 09:57:59.644749  108095 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0919 09:57:59.644862  108095 reflector.go:153] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0919 09:57:59.645053  108095 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.645327  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:57:59.645367  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:57:59.645664  108095 watch_cache.go:405] Replace watchCache (rev: 59807) 
I0919 09:57:59.645853  108095 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0919 09:57:59.645890  108095 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.645912  108095 reflector.go:153] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0919 09:57:59.646114  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:57:59.646131  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:57:59.646861  108095 watch_cache.go:405] Replace watchCache (rev: 59807) 
I0919 09:57:59.647111  108095 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0919 09:57:59.647131  108095 master.go:461] Enabling API group "admissionregistration.k8s.io".
I0919 09:57:59.647167  108095 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.647272  108095 reflector.go:153] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0919 09:57:59.647455  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:57:59.647482  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:57:59.648128  108095 store.go:1342] Monitoring events count at <storage-prefix>//events
I0919 09:57:59.648162  108095 reflector.go:153] Listing and watching *core.Event from storage/cacher.go:/events
I0919 09:57:59.648177  108095 master.go:461] Enabling API group "events.k8s.io".
I0919 09:57:59.648397  108095 watch_cache.go:405] Replace watchCache (rev: 59807) 
I0919 09:57:59.648405  108095 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.648605  108095 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.648698  108095 watch_cache.go:405] Replace watchCache (rev: 59807) 
I0919 09:57:59.648879  108095 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.649020  108095 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.649133  108095 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.649205  108095 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.649339  108095 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.649416  108095 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.649487  108095 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.649550  108095 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.650455  108095 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.650771  108095 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.651516  108095 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.651852  108095 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.652521  108095 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.652853  108095 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.653571  108095 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.653847  108095 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.654457  108095 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.654726  108095 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0919 09:57:59.654876  108095 genericapiserver.go:404] Skipping API batch/v2alpha1 because it has no resources.
I0919 09:57:59.655488  108095 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.655705  108095 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.656065  108095 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.656692  108095 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.657392  108095 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.658098  108095 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.658438  108095 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.659181  108095 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.659841  108095 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.660132  108095 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.660755  108095 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0919 09:57:59.660892  108095 genericapiserver.go:404] Skipping API node.k8s.io/v1alpha1 because it has no resources.
I0919 09:57:59.661526  108095 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.661820  108095 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.662353  108095 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
E0919 09:57:59.662354  108095 factory.go:590] Error getting pod permit-plugin5052de6b-1963-44f6-970a-8702b6b1a0b9/test-pod for retry: Get http://127.0.0.1:35645/api/v1/namespaces/permit-plugin5052de6b-1963-44f6-970a-8702b6b1a0b9/pods/test-pod: dial tcp 127.0.0.1:35645: connect: connection refused; retrying...
I0919 09:57:59.663052  108095 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.663515  108095 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.664104  108095 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.664652  108095 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.665214  108095 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.665671  108095 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.666319  108095 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.666853  108095 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0919 09:57:59.667027  108095 genericapiserver.go:404] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
I0919 09:57:59.667748  108095 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.668371  108095 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0919 09:57:59.668538  108095 genericapiserver.go:404] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
I0919 09:57:59.669109  108095 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.669747  108095 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.670039  108095 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.670598  108095 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.671094  108095 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.671539  108095 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.672033  108095 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0919 09:57:59.672187  108095 genericapiserver.go:404] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
I0919 09:57:59.672823  108095 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.673471  108095 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.673774  108095 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.674398  108095 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.674683  108095 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.675032  108095 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.675652  108095 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.675999  108095 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.676271  108095 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.676860  108095 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.677123  108095 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.677348  108095 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0919 09:57:59.677432  108095 genericapiserver.go:404] Skipping API apps/v1beta2 because it has no resources.
W0919 09:57:59.677476  108095 genericapiserver.go:404] Skipping API apps/v1beta1 because it has no resources.
I0919 09:57:59.677999  108095 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.678579  108095 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.679295  108095 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.679820  108095 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.680462  108095 storage_factory.go:285] storing events.events.k8s.io in events.k8s.io/v1beta1, reading as events.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 09:57:59.683235  108095 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 09:57:59.683267  108095 healthz.go:177] healthz check poststarthook/bootstrap-controller failed: not finished
I0919 09:57:59.683277  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:57:59.683287  108095 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 09:57:59.683297  108095 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 09:57:59.683304  108095 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[-]poststarthook/bootstrap-controller failed: reason withheld
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 09:57:59.683351  108095 httplog.go:90] GET /healthz: (231.34µs) 0 [Go-http-client/1.1 127.0.0.1:35816]
I0919 09:57:59.684707  108095 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.530376ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35818]
I0919 09:57:59.687096  108095 httplog.go:90] GET /api/v1/services: (1.168942ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35818]
I0919 09:57:59.690450  108095 httplog.go:90] GET /api/v1/services: (839.508µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35818]
I0919 09:57:59.692931  108095 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 09:57:59.693074  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:57:59.693133  108095 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 09:57:59.693181  108095 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 09:57:59.693224  108095 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 09:57:59.693322  108095 httplog.go:90] GET /api/v1/namespaces/kube-system: (904.828µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35818]
I0919 09:57:59.693412  108095 httplog.go:90] GET /healthz: (696.353µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35816]
I0919 09:57:59.694517  108095 httplog.go:90] GET /api/v1/services: (951.029µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35818]
I0919 09:57:59.694644  108095 httplog.go:90] GET /api/v1/services: (905.868µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35822]
I0919 09:57:59.696126  108095 httplog.go:90] POST /api/v1/namespaces: (1.931287ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35820]
I0919 09:57:59.697395  108095 httplog.go:90] GET /api/v1/namespaces/kube-public: (816.4µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35818]
I0919 09:57:59.699331  108095 httplog.go:90] POST /api/v1/namespaces: (1.54779ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35818]
I0919 09:57:59.700478  108095 httplog.go:90] GET /api/v1/namespaces/kube-node-lease: (879.047µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35818]
I0919 09:57:59.702007  108095 httplog.go:90] POST /api/v1/namespaces: (1.212844ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35818]
I0919 09:57:59.784253  108095 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 09:57:59.784425  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:57:59.784474  108095 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 09:57:59.784563  108095 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 09:57:59.784589  108095 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 09:57:59.784774  108095 httplog.go:90] GET /healthz: (660.461µs) 0 [Go-http-client/1.1 127.0.0.1:35818]
I0919 09:57:59.794158  108095 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 09:57:59.794313  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:57:59.794356  108095 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 09:57:59.794395  108095 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 09:57:59.794431  108095 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 09:57:59.794584  108095 httplog.go:90] GET /healthz: (557.377µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35818]
I0919 09:57:59.884195  108095 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 09:57:59.884248  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:57:59.884261  108095 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 09:57:59.884272  108095 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 09:57:59.884281  108095 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 09:57:59.884326  108095 httplog.go:90] GET /healthz: (280.026µs) 0 [Go-http-client/1.1 127.0.0.1:35818]
I0919 09:57:59.894194  108095 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 09:57:59.894232  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:57:59.894241  108095 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 09:57:59.894248  108095 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 09:57:59.894254  108095 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 09:57:59.894287  108095 httplog.go:90] GET /healthz: (252.037µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35818]
I0919 09:57:59.984152  108095 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 09:57:59.984380  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:57:59.984443  108095 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 09:57:59.984468  108095 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 09:57:59.984504  108095 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 09:57:59.984604  108095 httplog.go:90] GET /healthz: (602.71µs) 0 [Go-http-client/1.1 127.0.0.1:35818]
I0919 09:57:59.994175  108095 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 09:57:59.994237  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:57:59.994249  108095 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 09:57:59.994258  108095 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 09:57:59.994267  108095 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 09:57:59.994305  108095 httplog.go:90] GET /healthz: (284.14µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35818]
I0919 09:57:59.997464  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:57:59.999602  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:57:59.999673  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:57:59.999770  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:57:59.999828  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:00.000582  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:00.084160  108095 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 09:58:00.084225  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:58:00.084238  108095 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 09:58:00.084246  108095 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 09:58:00.084254  108095 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 09:58:00.084296  108095 httplog.go:90] GET /healthz: (290.747µs) 0 [Go-http-client/1.1 127.0.0.1:35818]
I0919 09:58:00.094197  108095 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 09:58:00.094235  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:58:00.094245  108095 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 09:58:00.094251  108095 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 09:58:00.094257  108095 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 09:58:00.094290  108095 httplog.go:90] GET /healthz: (285.812µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35818]
I0919 09:58:00.101484  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:00.101485  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:00.101818  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:00.101824  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:00.102005  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:00.103885  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:00.184226  108095 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 09:58:00.184267  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:58:00.184278  108095 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 09:58:00.184287  108095 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 09:58:00.184295  108095 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 09:58:00.184341  108095 httplog.go:90] GET /healthz: (278.387µs) 0 [Go-http-client/1.1 127.0.0.1:35818]
I0919 09:58:00.194194  108095 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 09:58:00.194262  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:58:00.194274  108095 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 09:58:00.194283  108095 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 09:58:00.194290  108095 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 09:58:00.194334  108095 httplog.go:90] GET /healthz: (282.729µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35818]
I0919 09:58:00.201310  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:00.284228  108095 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 09:58:00.284269  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:58:00.284278  108095 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 09:58:00.284285  108095 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 09:58:00.284290  108095 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 09:58:00.284325  108095 httplog.go:90] GET /healthz: (285.808µs) 0 [Go-http-client/1.1 127.0.0.1:35818]
I0919 09:58:00.294262  108095 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 09:58:00.294442  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:58:00.294490  108095 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 09:58:00.294525  108095 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 09:58:00.294583  108095 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 09:58:00.294687  108095 httplog.go:90] GET /healthz: (621.562µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35818]
I0919 09:58:00.309872  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:00.384195  108095 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 09:58:00.384236  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:58:00.384246  108095 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 09:58:00.384252  108095 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 09:58:00.384258  108095 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 09:58:00.384295  108095 httplog.go:90] GET /healthz: (251.821µs) 0 [Go-http-client/1.1 127.0.0.1:35818]
I0919 09:58:00.394103  108095 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 09:58:00.394137  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:58:00.394146  108095 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 09:58:00.394152  108095 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 09:58:00.394159  108095 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 09:58:00.394188  108095 httplog.go:90] GET /healthz: (207.572µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35818]
I0919 09:58:00.484233  108095 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 09:58:00.484384  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:58:00.484412  108095 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 09:58:00.484454  108095 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 09:58:00.484479  108095 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 09:58:00.484577  108095 httplog.go:90] GET /healthz: (544.736µs) 0 [Go-http-client/1.1 127.0.0.1:35818]
I0919 09:58:00.494167  108095 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 09:58:00.494345  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:58:00.494373  108095 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 09:58:00.494395  108095 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 09:58:00.494418  108095 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 09:58:00.494545  108095 httplog.go:90] GET /healthz: (539.413µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35818]
I0919 09:58:00.541763  108095 client.go:361] parsed scheme: "endpoint"
I0919 09:58:00.542010  108095 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 09:58:00.585247  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:58:00.585300  108095 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 09:58:00.585309  108095 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 09:58:00.585315  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 09:58:00.585367  108095 httplog.go:90] GET /healthz: (1.373251ms) 0 [Go-http-client/1.1 127.0.0.1:35818]
I0919 09:58:00.595308  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:58:00.595362  108095 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 09:58:00.595371  108095 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 09:58:00.595377  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 09:58:00.595419  108095 httplog.go:90] GET /healthz: (1.414554ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35818]
I0919 09:58:00.599910  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:00.599970  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:00.600002  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:00.600005  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:00.600057  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:00.602159  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:00.605000  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:00.685008  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.653005ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35816]
I0919 09:58:00.685086  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:58:00.685104  108095 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 09:58:00.685110  108095 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 09:58:00.685116  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 09:58:00.685140  108095 httplog.go:90] GET /healthz: (1.13856ms) 0 [Go-http-client/1.1 127.0.0.1:35828]
I0919 09:58:00.685457  108095 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.820637ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:00.686430  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (921.619µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35828]
I0919 09:58:00.686648  108095 httplog.go:90] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-node-critical: (975.077µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35818]
I0919 09:58:00.687432  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (746.67µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35828]
I0919 09:58:00.687464  108095 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: (1.230099ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:00.688835  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (758.423µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35828]
I0919 09:58:00.688835  108095 httplog.go:90] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (1.642957ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35818]
I0919 09:58:00.689100  108095 storage_scheduling.go:139] created PriorityClass system-node-critical with value 2000001000
I0919 09:58:00.689709  108095 httplog.go:90] POST /api/v1/namespaces/kube-system/configmaps: (1.669444ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:00.690119  108095 httplog.go:90] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-cluster-critical: (848.178µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35816]
I0919 09:58:00.690122  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (942.697µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35828]
I0919 09:58:00.691250  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (745.903µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:00.691654  108095 httplog.go:90] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (1.17707ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35816]
I0919 09:58:00.691879  108095 storage_scheduling.go:139] created PriorityClass system-cluster-critical with value 2000000000
I0919 09:58:00.691926  108095 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
I0919 09:58:00.692656  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (1.097084ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:00.693578  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (596.764µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:00.694513  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (620.975µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35816]
I0919 09:58:00.694562  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:58:00.694578  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:58:00.694601  108095 httplog.go:90] GET /healthz: (783.931µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:00.695435  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-admin: (608.403µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:00.697239  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.364648ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:00.697475  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/cluster-admin
I0919 09:58:00.698359  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:discovery: (719.676µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:00.699925  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.209103ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:00.700099  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:discovery
I0919 09:58:00.701084  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:basic-user: (787.899µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:00.702641  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.169868ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:00.702823  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:basic-user
I0919 09:58:00.703621  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:public-info-viewer: (622.279µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:00.705125  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.156091ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:00.705346  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:public-info-viewer
I0919 09:58:00.706318  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (808.849µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:00.707896  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.218619ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:00.708103  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/admin
I0919 09:58:00.709015  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (726.985µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:00.710618  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.185308ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:00.710885  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/edit
I0919 09:58:00.711758  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (660.58µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:00.713338  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.217552ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:00.713537  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/view
I0919 09:58:00.714481  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (695.914µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:00.716157  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.309412ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:00.716316  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin
I0919 09:58:00.717284  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (788.212µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:00.719174  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.460293ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:00.719467  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit
I0919 09:58:00.720401  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (741.525µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:00.722284  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.478828ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:00.722554  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view
I0919 09:58:00.723473  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:heapster: (725.899µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:00.724897  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.048324ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:00.725181  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:heapster
I0919 09:58:00.726030  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node: (690.957µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:00.727786  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.386956ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:00.728100  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node
I0919 09:58:00.728961  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-problem-detector: (653.706µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:00.730469  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.208561ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:00.730652  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector
I0919 09:58:00.731689  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kubelet-api-admin: (807.742µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:00.733244  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.162662ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:00.733447  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kubelet-api-admin
I0919 09:58:00.734325  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-bootstrapper: (709.464µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:00.735815  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.147552ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:00.736003  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-bootstrapper
I0919 09:58:00.736831  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:auth-delegator: (634.002µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:00.738389  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.201145ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:00.738714  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:auth-delegator
I0919 09:58:00.739754  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-aggregator: (822.299µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:00.741394  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.18619ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:00.741709  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator
I0919 09:58:00.742729  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-controller-manager: (821.947µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:00.744217  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.136676ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:00.744414  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager
I0919 09:58:00.745167  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-dns: (571.733µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:00.746523  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (985.532µs) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:00.746769  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-dns
I0919 09:58:00.747836  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:persistent-volume-provisioner: (856.953µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:00.749463  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.16606ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:00.749741  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner
I0919 09:58:00.750892  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-attacher: (836.107µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:00.753138  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.641308ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:00.753332  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:csi-external-attacher
I0919 09:58:00.754277  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:nodeclient: (670.897µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:00.755872  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.151418ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:00.756097  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:nodeclient
I0919 09:58:00.757143  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient: (728.187µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:00.758616  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.072897ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:00.758928  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
I0919 09:58:00.759821  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:volume-scheduler: (690.865µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:00.761530  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.328373ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:00.761773  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:volume-scheduler
I0919 09:58:00.762778  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-proxier: (806.754µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:00.764689  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.3711ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:00.765009  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-proxier
I0919 09:58:00.765905  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-scheduler: (640.998µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:00.767598  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.225415ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:00.767986  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler
I0919 09:58:00.768792  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-provisioner: (647.216µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:00.770464  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.174078ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:00.770778  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:csi-external-provisioner
I0919 09:58:00.771640  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:attachdetach-controller: (664.374µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:00.773324  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.234216ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:00.773559  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0919 09:58:00.774564  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:clusterrole-aggregation-controller: (786.734µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:00.776148  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.221065ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:00.776386  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0919 09:58:00.777486  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:cronjob-controller: (840.852µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:00.779305  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.405977ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:00.779501  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0919 09:58:00.780412  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:daemon-set-controller: (726.65µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:00.782069  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.281882ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:00.782240  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0919 09:58:00.783034  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:deployment-controller: (605.19µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:00.784572  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.195496ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:00.784885  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller
I0919 09:58:00.784996  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:58:00.785100  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:58:00.785188  108095 httplog.go:90] GET /healthz: (1.333064ms) 0 [Go-http-client/1.1 127.0.0.1:35816]
I0919 09:58:00.785832  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:disruption-controller: (648.258µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:00.787596  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.346711ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:00.787871  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller
I0919 09:58:00.788665  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:endpoint-controller: (598.557µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:00.790246  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.129927ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:00.790470  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0919 09:58:00.791287  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:expand-controller: (642.009µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:00.792821  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.194531ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:00.793128  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:expand-controller
I0919 09:58:00.794080  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:generic-garbage-collector: (761.245µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:00.794488  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:58:00.794560  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:58:00.794728  108095 httplog.go:90] GET /healthz: (830.798µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35816]
I0919 09:58:00.795762  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.303143ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:00.796067  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0919 09:58:00.797015  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:horizontal-pod-autoscaler: (693.096µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:00.798602  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.162802ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:00.798876  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0919 09:58:00.799883  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:job-controller: (718.022µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:00.801441  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.13701ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:00.801703  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller
I0919 09:58:00.802948  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:namespace-controller: (997.865µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:00.804617  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.246772ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:00.804959  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:namespace-controller
I0919 09:58:00.805779  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:node-controller: (622.732µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:00.807568  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.421919ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:00.807850  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:node-controller
I0919 09:58:00.808833  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:persistent-volume-binder: (764.01µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:00.810499  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.265873ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:00.810745  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0919 09:58:00.811783  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pod-garbage-collector: (791.635µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:00.813441  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.272305ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:00.813711  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0919 09:58:00.814678  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replicaset-controller: (773.111µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:00.816320  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.253806ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:00.816586  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0919 09:58:00.817578  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replication-controller: (804.365µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:00.819287  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.292863ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:00.819548  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller
I0919 09:58:00.820621  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:resourcequota-controller: (788.611µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:00.822222  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.216565ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:00.822442  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0919 09:58:00.823316  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:route-controller: (717.411µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:00.824807  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.160204ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:00.825119  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:route-controller
I0919 09:58:00.826088  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-account-controller: (730.333µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:00.827721  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.240657ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:00.828047  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-account-controller
I0919 09:58:00.828885  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-controller: (623.629µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:00.830440  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.125403ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:00.830718  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-controller
I0919 09:58:00.831654  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:statefulset-controller: (672.73µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:00.833165  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.157639ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:00.833323  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0919 09:58:00.844339  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:ttl-controller: (976.037µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:00.865535  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.129667ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:00.865803  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-controller
I0919 09:58:00.884819  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:certificate-controller: (1.34757ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:00.884870  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:58:00.884894  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:58:00.884926  108095 httplog.go:90] GET /healthz: (1.011284ms) 0 [Go-http-client/1.1 127.0.0.1:35816]
I0919 09:58:00.894997  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:58:00.895032  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:58:00.895074  108095 httplog.go:90] GET /healthz: (1.078097ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:00.905580  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.154336ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:00.905901  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller
I0919 09:58:00.924928  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pvc-protection-controller: (1.558105ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:00.945458  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.05044ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:00.945902  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0919 09:58:00.964827  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pv-protection-controller: (1.356084ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:00.985214  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:58:00.985249  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:58:00.985288  108095 httplog.go:90] GET /healthz: (1.258523ms) 0 [Go-http-client/1.1 127.0.0.1:35816]
I0919 09:58:00.985501  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.024019ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:00.985702  108095 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0919 09:58:00.995100  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:58:00.995137  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:58:00.995369  108095 httplog.go:90] GET /healthz: (1.215247ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:00.997629  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:00.999856  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:00.999889  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:00.999891  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:01.000009  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:01.000821  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:01.004674  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin: (1.298454ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:01.025555  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.123888ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:01.025771  108095 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin
I0919 09:58:01.045197  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (1.746983ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:01.065778  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.322522ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:01.066050  108095 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery
I0919 09:58:01.084796  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:basic-user: (1.373428ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:01.084876  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:58:01.084897  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:58:01.084927  108095 httplog.go:90] GET /healthz: (1.016496ms) 0 [Go-http-client/1.1 127.0.0.1:35816]
I0919 09:58:01.095075  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:58:01.095103  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:58:01.095149  108095 httplog.go:90] GET /healthz: (1.098985ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35816]
I0919 09:58:01.101823  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:01.101823  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:01.102066  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:01.102066  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:01.102224  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:01.104102  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:01.105189  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.784261ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35816]
I0919 09:58:01.105448  108095 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user
I0919 09:58:01.125153  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:public-info-viewer: (1.607335ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35816]
I0919 09:58:01.145856  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.382989ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35816]
I0919 09:58:01.146340  108095 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:public-info-viewer
I0919 09:58:01.164998  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node-proxier: (1.551188ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35816]
I0919 09:58:01.184880  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:58:01.184914  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:58:01.184972  108095 httplog.go:90] GET /healthz: (1.096076ms) 0 [Go-http-client/1.1 127.0.0.1:35826]
I0919 09:58:01.185997  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.566411ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35816]
I0919 09:58:01.186273  108095 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier
I0919 09:58:01.194865  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:58:01.194913  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:58:01.194978  108095 httplog.go:90] GET /healthz: (1.037763ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35816]
I0919 09:58:01.201497  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:01.204754  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-controller-manager: (1.360197ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35816]
I0919 09:58:01.225281  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.899102ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35816]
I0919 09:58:01.225562  108095 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager
I0919 09:58:01.244900  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-dns: (1.491029ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35816]
I0919 09:58:01.265031  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.695472ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35816]
I0919 09:58:01.265297  108095 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns
I0919 09:58:01.284776  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:58:01.284819  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:58:01.284851  108095 httplog.go:90] GET /healthz: (928.531µs) 0 [Go-http-client/1.1 127.0.0.1:35826]
I0919 09:58:01.284777  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-scheduler: (1.396592ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35816]
I0919 09:58:01.295020  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:58:01.295050  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:58:01.295093  108095 httplog.go:90] GET /healthz: (1.11002ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:01.305397  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.035085ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:01.305650  108095 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler
I0919 09:58:01.310056  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:01.324919  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:volume-scheduler: (1.500554ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:01.345250  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.856158ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:01.345651  108095 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:volume-scheduler
I0919 09:58:01.365090  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node: (1.612954ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:01.385308  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:58:01.385345  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:58:01.385393  108095 httplog.go:90] GET /healthz: (1.395541ms) 0 [Go-http-client/1.1 127.0.0.1:35816]
I0919 09:58:01.385713  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.313969ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:01.386031  108095 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node
I0919 09:58:01.395291  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:58:01.395336  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:58:01.395407  108095 httplog.go:90] GET /healthz: (1.109982ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:01.404855  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:attachdetach-controller: (1.497829ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:01.425777  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.285806ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:01.426076  108095 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0919 09:58:01.444901  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:clusterrole-aggregation-controller: (1.431418ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:01.465725  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.247625ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:01.466198  108095 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0919 09:58:01.485201  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:58:01.485232  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:58:01.485268  108095 httplog.go:90] GET /healthz: (1.277276ms) 0 [Go-http-client/1.1 127.0.0.1:35816]
I0919 09:58:01.485483  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:cronjob-controller: (1.982348ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:01.494931  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:58:01.494995  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:58:01.495044  108095 httplog.go:90] GET /healthz: (1.096992ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:01.505346  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.952113ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:01.505593  108095 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0919 09:58:01.524816  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:daemon-set-controller: (1.372735ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:01.545665  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.245409ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:01.546009  108095 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0919 09:58:01.564848  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:deployment-controller: (1.391486ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:01.585046  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:58:01.585293  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:58:01.585507  108095 httplog.go:90] GET /healthz: (1.585544ms) 0 [Go-http-client/1.1 127.0.0.1:35816]
I0919 09:58:01.585507  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.025514ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:01.585873  108095 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I0919 09:58:01.595081  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:58:01.595114  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:58:01.595157  108095 httplog.go:90] GET /healthz: (1.12449ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:01.600110  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:01.600147  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:01.600189  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:01.600213  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:01.600216  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:01.602331  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:01.604564  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:disruption-controller: (1.230314ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:01.605165  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:01.625650  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.189468ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:01.625900  108095 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I0919 09:58:01.644754  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:endpoint-controller: (1.372691ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:01.665734  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.293187ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:01.666051  108095 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0919 09:58:01.685016  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:expand-controller: (1.550941ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:01.685070  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:58:01.685095  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:58:01.685130  108095 httplog.go:90] GET /healthz: (1.036046ms) 0 [Go-http-client/1.1 127.0.0.1:35816]
I0919 09:58:01.694984  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:58:01.695015  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:58:01.695063  108095 httplog.go:90] GET /healthz: (1.071071ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35816]
I0919 09:58:01.705441  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.047436ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35816]
I0919 09:58:01.705678  108095 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
I0919 09:58:01.725168  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:generic-garbage-collector: (1.638809ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35816]
I0919 09:58:01.745483  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.113451ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35816]
I0919 09:58:01.745895  108095 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0919 09:58:01.764892  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:horizontal-pod-autoscaler: (1.473845ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35816]
I0919 09:58:01.785478  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.044542ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35816]
I0919 09:58:01.785766  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:58:01.785827  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:58:01.785949  108095 httplog.go:90] GET /healthz: (2.00873ms) 0 [Go-http-client/1.1 127.0.0.1:35826]
I0919 09:58:01.785976  108095 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0919 09:58:01.795084  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:58:01.795121  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:58:01.795222  108095 httplog.go:90] GET /healthz: (1.218368ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:01.805051  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:job-controller: (1.535754ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:01.825800  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.363919ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:01.826102  108095 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I0919 09:58:01.844921  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:namespace-controller: (1.442856ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:01.865782  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.314222ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:01.866315  108095 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I0919 09:58:01.884914  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:58:01.884977  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:58:01.885005  108095 httplog.go:90] GET /healthz: (934.427µs) 0 [Go-http-client/1.1 127.0.0.1:35816]
I0919 09:58:01.884928  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:node-controller: (1.447812ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:01.895119  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:58:01.895364  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:58:01.895635  108095 httplog.go:90] GET /healthz: (1.602033ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:01.905620  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.242564ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:01.906036  108095 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I0919 09:58:01.924506  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:persistent-volume-binder: (1.121817ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:01.945439  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.973275ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:01.945835  108095 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0919 09:58:01.964842  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pod-garbage-collector: (1.446797ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:01.985603  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:58:01.985624  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.151931ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:01.985647  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:58:01.985718  108095 httplog.go:90] GET /healthz: (1.725584ms) 0 [Go-http-client/1.1 127.0.0.1:35816]
I0919 09:58:01.986033  108095 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0919 09:58:01.995240  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:58:01.995277  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:58:01.995329  108095 httplog.go:90] GET /healthz: (1.229232ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:01.997829  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:02.000011  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:02.000130  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:02.000031  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:02.000048  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:02.001026  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:02.005130  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replicaset-controller: (1.688668ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:02.025561  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.107574ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:02.025829  108095 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0919 09:58:02.044840  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replication-controller: (1.338015ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:02.065555  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.129722ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:02.066050  108095 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I0919 09:58:02.084916  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:resourcequota-controller: (1.48038ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:02.085246  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:58:02.085280  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:58:02.085309  108095 httplog.go:90] GET /healthz: (1.328311ms) 0 [Go-http-client/1.1 127.0.0.1:35816]
I0919 09:58:02.095115  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:58:02.095308  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:58:02.095480  108095 httplog.go:90] GET /healthz: (1.462923ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35816]
I0919 09:58:02.102044  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:02.102087  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:02.102230  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:02.102233  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:02.102397  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:02.104303  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:02.105390  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.028293ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35816]
I0919 09:58:02.105705  108095 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0919 09:58:02.124794  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:route-controller: (1.327403ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35816]
I0919 09:58:02.145716  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.293341ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35816]
I0919 09:58:02.146066  108095 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I0919 09:58:02.164994  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-account-controller: (1.538382ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35816]
I0919 09:58:02.185152  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:58:02.185183  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:58:02.185260  108095 httplog.go:90] GET /healthz: (1.25565ms) 0 [Go-http-client/1.1 127.0.0.1:35826]
I0919 09:58:02.185900  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.401028ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35816]
I0919 09:58:02.186270  108095 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I0919 09:58:02.195178  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:58:02.195206  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:58:02.195247  108095 httplog.go:90] GET /healthz: (1.123086ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35816]
I0919 09:58:02.201715  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:02.204762  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-controller: (1.397791ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35816]
I0919 09:58:02.225354  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.979017ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35816]
I0919 09:58:02.225620  108095 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I0919 09:58:02.245205  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:statefulset-controller: (1.583567ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35816]
I0919 09:58:02.265559  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.182947ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35816]
I0919 09:58:02.265880  108095 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0919 09:58:02.284932  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:58:02.285026  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:58:02.285069  108095 httplog.go:90] GET /healthz: (1.209304ms) 0 [Go-http-client/1.1 127.0.0.1:35826]
I0919 09:58:02.285116  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:ttl-controller: (1.592464ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35816]
I0919 09:58:02.295206  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:58:02.295245  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:58:02.295320  108095 httplog.go:90] GET /healthz: (1.267044ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35816]
I0919 09:58:02.305375  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.961781ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35816]
I0919 09:58:02.305677  108095 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I0919 09:58:02.310297  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:02.325013  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:certificate-controller: (1.514168ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35816]
I0919 09:58:02.345677  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.23781ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35816]
I0919 09:58:02.345918  108095 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I0919 09:58:02.364985  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pvc-protection-controller: (1.453842ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35816]
I0919 09:58:02.385109  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:58:02.385303  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:58:02.385453  108095 httplog.go:90] GET /healthz: (1.542143ms) 0 [Go-http-client/1.1 127.0.0.1:35826]
I0919 09:58:02.385504  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.057179ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35816]
I0919 09:58:02.385713  108095 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0919 09:58:02.394918  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:58:02.394966  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:58:02.395011  108095 httplog.go:90] GET /healthz: (974.154µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35816]
I0919 09:58:02.404427  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pv-protection-controller: (1.120595ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35816]
I0919 09:58:02.425364  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.905989ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35816]
I0919 09:58:02.425668  108095 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0919 09:58:02.444454  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/extension-apiserver-authentication-reader: (1.088378ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35816]
I0919 09:58:02.446143  108095 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.256919ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35816]
I0919 09:58:02.465350  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.036493ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35816]
I0919 09:58:02.465721  108095 storage_rbac.go:278] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
I0919 09:58:02.484886  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:58:02.484928  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:58:02.485005  108095 httplog.go:90] GET /healthz: (978.488µs) 0 [Go-http-client/1.1 127.0.0.1:35826]
I0919 09:58:02.485166  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:bootstrap-signer: (1.792393ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35816]
I0919 09:58:02.487008  108095 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.41116ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35816]
I0919 09:58:02.495186  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:58:02.495368  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:58:02.495505  108095 httplog.go:90] GET /healthz: (1.441965ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35816]
I0919 09:58:02.505480  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.096003ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35816]
I0919 09:58:02.505733  108095 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0919 09:58:02.524788  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:cloud-provider: (1.449526ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35816]
I0919 09:58:02.526604  108095 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.228335ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35816]
I0919 09:58:02.546344  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.95525ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35816]
I0919 09:58:02.546682  108095 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0919 09:58:02.564831  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:token-cleaner: (1.387779ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35816]
I0919 09:58:02.566903  108095 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.521961ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35816]
I0919 09:58:02.585029  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:58:02.585218  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:58:02.585382  108095 httplog.go:90] GET /healthz: (1.390507ms) 0 [Go-http-client/1.1 127.0.0.1:35826]
I0919 09:58:02.585537  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.043349ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35816]
I0919 09:58:02.585783  108095 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0919 09:58:02.595166  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:58:02.595202  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:58:02.595250  108095 httplog.go:90] GET /healthz: (1.220918ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35816]
I0919 09:58:02.600309  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:02.600386  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:02.600411  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:02.600415  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:02.600427  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:02.602536  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:02.604833  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-controller-manager: (1.38147ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35816]
I0919 09:58:02.605337  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:02.606990  108095 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.573402ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35816]
I0919 09:58:02.625572  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.078535ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35816]
I0919 09:58:02.625984  108095 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0919 09:58:02.645128  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-scheduler: (1.648356ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35816]
I0919 09:58:02.647352  108095 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.467173ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35816]
I0919 09:58:02.665825  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.434761ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35816]
I0919 09:58:02.666166  108095 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0919 09:58:02.684850  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:58:02.685073  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:58:02.684969  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles/system:controller:bootstrap-signer: (1.462551ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35816]
I0919 09:58:02.685273  108095 httplog.go:90] GET /healthz: (1.352798ms) 0 [Go-http-client/1.1 127.0.0.1:35826]
I0919 09:58:02.687045  108095 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.31015ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:02.694930  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:58:02.694984  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:58:02.695025  108095 httplog.go:90] GET /healthz: (1.014751ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:02.705736  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles: (2.331198ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:02.706088  108095 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0919 09:58:02.725023  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::extension-apiserver-authentication-reader: (1.561831ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:02.726788  108095 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.217579ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:02.746122  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.58454ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:02.746507  108095 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::extension-apiserver-authentication-reader in kube-system
I0919 09:58:02.765175  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-controller-manager: (1.64431ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:02.767404  108095 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.431632ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:02.785254  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:58:02.785533  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:58:02.785566  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.095963ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:02.785771  108095 httplog.go:90] GET /healthz: (1.710876ms) 0 [Go-http-client/1.1 127.0.0.1:35816]
I0919 09:58:02.786042  108095 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0919 09:58:02.792569  108095 httplog.go:90] GET /api/v1/namespaces/default: (1.671694ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46632]
I0919 09:58:02.794440  108095 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.42787ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46632]
I0919 09:58:02.794590  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:58:02.794607  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:58:02.794630  108095 httplog.go:90] GET /healthz: (736.986µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35816]
I0919 09:58:02.795908  108095 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.023292ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46632]
I0919 09:58:02.804554  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-scheduler: (1.194436ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35816]
I0919 09:58:02.806338  108095 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.228205ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35816]
I0919 09:58:02.825885  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.433701ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35816]
I0919 09:58:02.826271  108095 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0919 09:58:02.844856  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:bootstrap-signer: (1.401424ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35816]
I0919 09:58:02.846640  108095 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.25637ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35816]
I0919 09:58:02.865683  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.100752ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35816]
I0919 09:58:02.866275  108095 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0919 09:58:02.884807  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:58:02.884840  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:58:02.884879  108095 httplog.go:90] GET /healthz: (882.218µs) 0 [Go-http-client/1.1 127.0.0.1:35826]
I0919 09:58:02.885076  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:cloud-provider: (1.561757ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35816]
I0919 09:58:02.886889  108095 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.266628ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35816]
I0919 09:58:02.895085  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:58:02.895119  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:58:02.895191  108095 httplog.go:90] GET /healthz: (1.174787ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35816]
I0919 09:58:02.905569  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.115898ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35816]
I0919 09:58:02.906010  108095 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0919 09:58:02.924761  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:token-cleaner: (1.333734ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35816]
I0919 09:58:02.926785  108095 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.354222ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35816]
I0919 09:58:02.945685  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.212225ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35816]
I0919 09:58:02.946166  108095 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0919 09:58:02.964779  108095 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings/system:controller:bootstrap-signer: (1.328856ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35816]
I0919 09:58:02.966735  108095 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.438879ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35816]
I0919 09:58:02.984829  108095 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 09:58:02.985092  108095 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 09:58:02.985382  108095 httplog.go:90] GET /healthz: (1.373067ms) 0 [Go-http-client/1.1 127.0.0.1:35826]
I0919 09:58:02.985533  108095 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings: (1.966486ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35816]
I0919 09:58:02.985737  108095 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0919 09:58:02.995007  108095 httplog.go:90] GET /healthz: (970.231µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35816]
I0919 09:58:02.998699  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:03.000256  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:03.000262  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:03.000309  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:03.000309  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:03.000710  108095 httplog.go:90] GET /api/v1/namespaces/default: (5.23029ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35816]
I0919 09:58:03.001309  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:03.002923  108095 httplog.go:90] POST /api/v1/namespaces: (1.637286ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35816]
I0919 09:58:03.004255  108095 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (882.615µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35816]
I0919 09:58:03.008279  108095 httplog.go:90] POST /api/v1/namespaces/default/services: (3.482305ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35816]
I0919 09:58:03.009788  108095 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (934.811µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35816]
I0919 09:58:03.012308  108095 httplog.go:90] POST /api/v1/namespaces/default/endpoints: (1.884536ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35816]
I0919 09:58:03.085127  108095 httplog.go:90] GET /healthz: (1.099722ms) 200 [Go-http-client/1.1 127.0.0.1:35816]
W0919 09:58:03.086381  108095 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 09:58:03.086420  108095 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 09:58:03.086454  108095 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 09:58:03.086463  108095 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 09:58:03.086489  108095 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 09:58:03.086498  108095 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 09:58:03.086509  108095 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 09:58:03.086516  108095 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 09:58:03.086528  108095 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 09:58:03.086539  108095 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 09:58:03.086546  108095 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 09:58:03.086596  108095 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0919 09:58:03.086615  108095 factory.go:294] Creating scheduler from algorithm provider 'DefaultProvider'
I0919 09:58:03.086624  108095 factory.go:382] Creating scheduler with fit predicates 'map[CheckNodeUnschedulable:{} CheckVolumeBinding:{} GeneralPredicates:{} MatchInterPodAffinity:{} MaxAzureDiskVolumeCount:{} MaxCSIVolumeCountPred:{} MaxEBSVolumeCount:{} MaxGCEPDVolumeCount:{} NoDiskConflict:{} NoVolumeZoneConflict:{} PodToleratesNodeTaints:{}]' and priority functions 'map[BalancedResourceAllocation:{} ImageLocalityPriority:{} InterPodAffinityPriority:{} LeastRequestedPriority:{} NodeAffinityPriority:{} NodePreferAvoidPodsPriority:{} SelectorSpreadPriority:{} TaintTolerationPriority:{}]'
I0919 09:58:03.086799  108095 shared_informer.go:197] Waiting for caches to sync for scheduler
I0919 09:58:03.087055  108095 reflector.go:118] Starting reflector *v1.Pod (12h0m0s) from k8s.io/kubernetes/test/integration/scheduler/util.go:231
I0919 09:58:03.087244  108095 reflector.go:153] Listing and watching *v1.Pod from k8s.io/kubernetes/test/integration/scheduler/util.go:231
I0919 09:58:03.088544  108095 httplog.go:90] GET /api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0: (718.302µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35816]
I0919 09:58:03.089494  108095 get.go:251] Starting watch for /api/v1/pods, rv=59807 labels= fields=status.phase!=Failed,status.phase!=Succeeded timeout=6m1s
I0919 09:58:03.102437  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:03.102460  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:03.102552  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:03.102572  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:03.102563  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:03.104470  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:03.186992  108095 shared_informer.go:227] caches populated
I0919 09:58:03.187031  108095 shared_informer.go:204] Caches are synced for scheduler 
I0919 09:58:03.187297  108095 reflector.go:118] Starting reflector *v1beta1.CSINode (1s) from k8s.io/client-go/informers/factory.go:134
I0919 09:58:03.187320  108095 reflector.go:153] Listing and watching *v1beta1.CSINode from k8s.io/client-go/informers/factory.go:134
I0919 09:58:03.187695  108095 reflector.go:118] Starting reflector *v1.Service (1s) from k8s.io/client-go/informers/factory.go:134
I0919 09:58:03.187718  108095 reflector.go:153] Listing and watching *v1.Service from k8s.io/client-go/informers/factory.go:134
I0919 09:58:03.187739  108095 reflector.go:118] Starting reflector *v1.ReplicaSet (1s) from k8s.io/client-go/informers/factory.go:134
I0919 09:58:03.187750  108095 reflector.go:153] Listing and watching *v1.ReplicaSet from k8s.io/client-go/informers/factory.go:134
I0919 09:58:03.188118  108095 reflector.go:118] Starting reflector *v1.PersistentVolume (1s) from k8s.io/client-go/informers/factory.go:134
I0919 09:58:03.188153  108095 reflector.go:153] Listing and watching *v1.PersistentVolume from k8s.io/client-go/informers/factory.go:134
I0919 09:58:03.188172  108095 httplog.go:90] GET /apis/storage.k8s.io/v1beta1/csinodes?limit=500&resourceVersion=0: (486.577µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:03.188194  108095 reflector.go:118] Starting reflector *v1.StatefulSet (1s) from k8s.io/client-go/informers/factory.go:134
I0919 09:58:03.188209  108095 reflector.go:153] Listing and watching *v1.StatefulSet from k8s.io/client-go/informers/factory.go:134
I0919 09:58:03.187855  108095 reflector.go:118] Starting reflector *v1.StorageClass (1s) from k8s.io/client-go/informers/factory.go:134
I0919 09:58:03.188257  108095 reflector.go:118] Starting reflector *v1.Node (1s) from k8s.io/client-go/informers/factory.go:134
I0919 09:58:03.188278  108095 reflector.go:153] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:134
I0919 09:58:03.188572  108095 reflector.go:118] Starting reflector *v1.PersistentVolumeClaim (1s) from k8s.io/client-go/informers/factory.go:134
I0919 09:58:03.188589  108095 reflector.go:153] Listing and watching *v1.PersistentVolumeClaim from k8s.io/client-go/informers/factory.go:134
I0919 09:58:03.188882  108095 httplog.go:90] GET /apis/apps/v1/replicasets?limit=500&resourceVersion=0: (509.207µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35832]
I0919 09:58:03.189116  108095 httplog.go:90] GET /apis/apps/v1/statefulsets?limit=500&resourceVersion=0: (429.864µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35840]
I0919 09:58:03.189334  108095 reflector.go:118] Starting reflector *v1beta1.PodDisruptionBudget (1s) from k8s.io/client-go/informers/factory.go:134
I0919 09:58:03.189456  108095 reflector.go:153] Listing and watching *v1beta1.PodDisruptionBudget from k8s.io/client-go/informers/factory.go:134
I0919 09:58:03.189543  108095 httplog.go:90] GET /api/v1/persistentvolumes?limit=500&resourceVersion=0: (545.753µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:03.188262  108095 reflector.go:153] Listing and watching *v1.StorageClass from k8s.io/client-go/informers/factory.go:134
I0919 09:58:03.189721  108095 get.go:251] Starting watch for /apis/apps/v1/replicasets, rv=59807 labels= fields= timeout=6m9s
I0919 09:58:03.189436  108095 get.go:251] Starting watch for /apis/storage.k8s.io/v1beta1/csinodes, rv=59807 labels= fields= timeout=7m30s
I0919 09:58:03.190301  108095 get.go:251] Starting watch for /apis/apps/v1/statefulsets, rv=59807 labels= fields= timeout=8m3s
I0919 09:58:03.190325  108095 httplog.go:90] GET /apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: (353.357µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35842]
I0919 09:58:03.190407  108095 httplog.go:90] GET /api/v1/nodes?limit=500&resourceVersion=0: (379.683µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35840]
I0919 09:58:03.190311  108095 httplog.go:90] GET /api/v1/services?limit=500&resourceVersion=0: (1.282981ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35834]
I0919 09:58:03.190690  108095 httplog.go:90] GET /apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: (291.2µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35826]
I0919 09:58:03.190764  108095 get.go:251] Starting watch for /api/v1/persistentvolumes, rv=59807 labels= fields= timeout=9m30s
I0919 09:58:03.191112  108095 get.go:251] Starting watch for /api/v1/nodes, rv=59807 labels= fields= timeout=9m16s
I0919 09:58:03.191230  108095 httplog.go:90] GET /api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: (2.047645ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35838]
I0919 09:58:03.191365  108095 get.go:251] Starting watch for /apis/policy/v1beta1/poddisruptionbudgets, rv=59807 labels= fields= timeout=8m55s
I0919 09:58:03.191661  108095 get.go:251] Starting watch for /apis/storage.k8s.io/v1/storageclasses, rv=59807 labels= fields= timeout=6m45s
I0919 09:58:03.191843  108095 get.go:251] Starting watch for /api/v1/services, rv=59921 labels= fields= timeout=5m58s
I0919 09:58:03.191742  108095 get.go:251] Starting watch for /api/v1/persistentvolumeclaims, rv=59807 labels= fields= timeout=7m13s
I0919 09:58:03.192134  108095 reflector.go:118] Starting reflector *v1.ReplicationController (1s) from k8s.io/client-go/informers/factory.go:134
I0919 09:58:03.192148  108095 reflector.go:153] Listing and watching *v1.ReplicationController from k8s.io/client-go/informers/factory.go:134
I0919 09:58:03.192796  108095 httplog.go:90] GET /api/v1/replicationcontrollers?limit=500&resourceVersion=0: (448.789µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35848]
I0919 09:58:03.193611  108095 get.go:251] Starting watch for /api/v1/replicationcontrollers, rv=59807 labels= fields= timeout=8m42s
I0919 09:58:03.201915  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:03.287286  108095 shared_informer.go:227] caches populated
I0919 09:58:03.287331  108095 shared_informer.go:227] caches populated
I0919 09:58:03.287340  108095 shared_informer.go:227] caches populated
I0919 09:58:03.287349  108095 shared_informer.go:227] caches populated
I0919 09:58:03.287357  108095 shared_informer.go:227] caches populated
I0919 09:58:03.287364  108095 shared_informer.go:227] caches populated
I0919 09:58:03.287371  108095 shared_informer.go:227] caches populated
I0919 09:58:03.287379  108095 shared_informer.go:227] caches populated
I0919 09:58:03.287401  108095 shared_informer.go:227] caches populated
I0919 09:58:03.287410  108095 shared_informer.go:227] caches populated
I0919 09:58:03.287422  108095 shared_informer.go:227] caches populated
I0919 09:58:03.290500  108095 httplog.go:90] POST /api/v1/namespaces: (2.313197ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35852]
I0919 09:58:03.290882  108095 node_lifecycle_controller.go:327] Sending events to api server.
I0919 09:58:03.290962  108095 node_lifecycle_controller.go:359] Controller is using taint based evictions.
W0919 09:58:03.291063  108095 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0919 09:58:03.291272  108095 taint_manager.go:162] Sending events to api server.
I0919 09:58:03.291463  108095 node_lifecycle_controller.go:453] Controller will reconcile labels.
I0919 09:58:03.291569  108095 node_lifecycle_controller.go:465] Controller will taint node by condition.
W0919 09:58:03.291642  108095 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 09:58:03.291729  108095 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0919 09:58:03.291834  108095 node_lifecycle_controller.go:488] Starting node controller
I0919 09:58:03.291871  108095 shared_informer.go:197] Waiting for caches to sync for taint
I0919 09:58:03.292069  108095 reflector.go:118] Starting reflector *v1.Namespace (1s) from k8s.io/client-go/informers/factory.go:134
I0919 09:58:03.292090  108095 reflector.go:153] Listing and watching *v1.Namespace from k8s.io/client-go/informers/factory.go:134
I0919 09:58:03.293063  108095 httplog.go:90] GET /api/v1/namespaces?limit=500&resourceVersion=0: (693.161µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35852]
I0919 09:58:03.293912  108095 get.go:251] Starting watch for /api/v1/namespaces, rv=59923 labels= fields= timeout=7m14s
I0919 09:58:03.310495  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:03.392200  108095 shared_informer.go:227] caches populated
I0919 09:58:03.392519  108095 reflector.go:118] Starting reflector *v1.DaemonSet (1s) from k8s.io/client-go/informers/factory.go:134
I0919 09:58:03.392640  108095 reflector.go:153] Listing and watching *v1.DaemonSet from k8s.io/client-go/informers/factory.go:134
I0919 09:58:03.392554  108095 reflector.go:118] Starting reflector *v1.Pod (1s) from k8s.io/client-go/informers/factory.go:134
I0919 09:58:03.392840  108095 reflector.go:153] Listing and watching *v1.Pod from k8s.io/client-go/informers/factory.go:134
I0919 09:58:03.392616  108095 reflector.go:118] Starting reflector *v1beta1.Lease (1s) from k8s.io/client-go/informers/factory.go:134
I0919 09:58:03.393033  108095 reflector.go:153] Listing and watching *v1beta1.Lease from k8s.io/client-go/informers/factory.go:134
I0919 09:58:03.394032  108095 httplog.go:90] GET /apis/apps/v1/daemonsets?limit=500&resourceVersion=0: (618.413µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35856]
I0919 09:58:03.394032  108095 httplog.go:90] GET /api/v1/pods?limit=500&resourceVersion=0: (618.433µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35854]
I0919 09:58:03.394161  108095 httplog.go:90] GET /apis/coordination.k8s.io/v1beta1/leases?limit=500&resourceVersion=0: (440.247µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35860]
I0919 09:58:03.394631  108095 get.go:251] Starting watch for /apis/apps/v1/daemonsets, rv=59807 labels= fields= timeout=9m3s
I0919 09:58:03.394753  108095 get.go:251] Starting watch for /api/v1/pods, rv=59807 labels= fields= timeout=8m9s
I0919 09:58:03.395274  108095 get.go:251] Starting watch for /apis/coordination.k8s.io/v1beta1/leases, rv=59807 labels= fields= timeout=9m26s
I0919 09:58:03.429639  108095 node_lifecycle_controller.go:718] Controller observed a Node deletion: node-0
I0919 09:58:03.429692  108095 controller_utils.go:168] Recording Removing Node node-0 from Controller event message for node node-0
I0919 09:58:03.429722  108095 node_lifecycle_controller.go:718] Controller observed a Node deletion: node-1
I0919 09:58:03.429727  108095 controller_utils.go:168] Recording Removing Node node-1 from Controller event message for node node-1
I0919 09:58:03.429737  108095 node_lifecycle_controller.go:718] Controller observed a Node deletion: node-2
I0919 09:58:03.429743  108095 controller_utils.go:168] Recording Removing Node node-2 from Controller event message for node node-2
I0919 09:58:03.429825  108095 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-2", UID:"7ea07fc6-04d4-409f-bb59-3b784ef46639", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RemovingNode' Node node-2 event: Removing Node node-2 from Controller
I0919 09:58:03.429865  108095 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-1", UID:"1e17a788-0585-42e9-a109-9d5f9668d683", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RemovingNode' Node node-1 event: Removing Node node-1 from Controller
I0919 09:58:03.429873  108095 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-0", UID:"ddcd488c-74af-4b90-8305-7f626566db9c", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RemovingNode' Node node-0 event: Removing Node node-0 from Controller
I0919 09:58:03.432882  108095 httplog.go:90] POST /api/v1/namespaces/default/events: (2.539903ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46516]
I0919 09:58:03.434969  108095 httplog.go:90] POST /api/v1/namespaces/default/events: (1.55372ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46516]
I0919 09:58:03.436759  108095 httplog.go:90] POST /api/v1/namespaces/default/events: (1.446894ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46516]
I0919 09:58:03.492053  108095 shared_informer.go:227] caches populated
I0919 09:58:03.492096  108095 shared_informer.go:204] Caches are synced for taint 
I0919 09:58:03.492146  108095 taint_manager.go:186] Starting NoExecuteTaintManager
I0919 09:58:03.492458  108095 shared_informer.go:227] caches populated
I0919 09:58:03.492575  108095 shared_informer.go:227] caches populated
I0919 09:58:03.492606  108095 shared_informer.go:227] caches populated
I0919 09:58:03.492659  108095 shared_informer.go:227] caches populated
I0919 09:58:03.492687  108095 shared_informer.go:227] caches populated
I0919 09:58:03.492707  108095 shared_informer.go:227] caches populated
I0919 09:58:03.492746  108095 shared_informer.go:227] caches populated
I0919 09:58:03.492785  108095 shared_informer.go:227] caches populated
I0919 09:58:03.492851  108095 shared_informer.go:227] caches populated
I0919 09:58:03.492888  108095 shared_informer.go:227] caches populated
I0919 09:58:03.492924  108095 shared_informer.go:227] caches populated
I0919 09:58:03.492990  108095 shared_informer.go:227] caches populated
I0919 09:58:03.493023  108095 shared_informer.go:227] caches populated
I0919 09:58:03.496275  108095 httplog.go:90] POST /api/v1/nodes: (2.452209ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35862]
I0919 09:58:03.496767  108095 node_tree.go:93] Added node "node-0" in group "region1:\x00:zone1" to NodeTree
I0919 09:58:03.496782  108095 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-0"}
I0919 09:58:03.496923  108095 taint_manager.go:438] Updating known taints on node node-0: []
I0919 09:58:03.498758  108095 httplog.go:90] POST /api/v1/nodes: (1.875959ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35862]
I0919 09:58:03.498994  108095 node_tree.go:93] Added node "node-1" in group "region1:\x00:zone1" to NodeTree
I0919 09:58:03.499018  108095 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-1"}
I0919 09:58:03.499034  108095 taint_manager.go:438] Updating known taints on node node-1: []
I0919 09:58:03.502031  108095 httplog.go:90] POST /api/v1/nodes: (2.713459ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35862]
I0919 09:58:03.503332  108095 node_tree.go:93] Added node "node-2" in group "region1:\x00:zone1" to NodeTree
I0919 09:58:03.503389  108095 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-2"}
I0919 09:58:03.503409  108095 taint_manager.go:438] Updating known taints on node node-2: []
I0919 09:58:03.505190  108095 httplog.go:90] POST /api/v1/namespaces/taint-based-evictions3814bdce-0a88-49a9-8f27-6bb1a2ceae34/pods: (1.674288ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35862]
I0919 09:58:03.505461  108095 scheduling_queue.go:830] About to try and schedule pod taint-based-evictions3814bdce-0a88-49a9-8f27-6bb1a2ceae34/testpod-2
I0919 09:58:03.505481  108095 scheduler.go:530] Attempting to schedule pod: taint-based-evictions3814bdce-0a88-49a9-8f27-6bb1a2ceae34/testpod-2
I0919 09:58:03.505576  108095 taint_manager.go:398] Noticed pod update: types.NamespacedName{Namespace:"taint-based-evictions3814bdce-0a88-49a9-8f27-6bb1a2ceae34", Name:"testpod-2"}
I0919 09:58:03.505709  108095 scheduler_binder.go:257] AssumePodVolumes for pod "taint-based-evictions3814bdce-0a88-49a9-8f27-6bb1a2ceae34/testpod-2", node "node-1"
I0919 09:58:03.505728  108095 scheduler_binder.go:267] AssumePodVolumes for pod "taint-based-evictions3814bdce-0a88-49a9-8f27-6bb1a2ceae34/testpod-2", node "node-1": all PVCs bound and nothing to do
I0919 09:58:03.505768  108095 factory.go:606] Attempting to bind testpod-2 to node-1
I0919 09:58:03.507500  108095 httplog.go:90] POST /api/v1/namespaces/taint-based-evictions3814bdce-0a88-49a9-8f27-6bb1a2ceae34/pods/testpod-2/binding: (1.499374ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35862]
I0919 09:58:03.507717  108095 taint_manager.go:398] Noticed pod update: types.NamespacedName{Namespace:"taint-based-evictions3814bdce-0a88-49a9-8f27-6bb1a2ceae34", Name:"testpod-2"}
I0919 09:58:03.507868  108095 scheduler.go:662] pod taint-based-evictions3814bdce-0a88-49a9-8f27-6bb1a2ceae34/testpod-2 is bound successfully on node "node-1", 3 nodes evaluated, 3 nodes were found feasible. Bound node resource: "Capacity: CPU<4>|Memory<16Gi>|Pods<110>|StorageEphemeral<0>; Allocatable: CPU<4>|Memory<16Gi>|Pods<110>|StorageEphemeral<0>.".
I0919 09:58:03.509825  108095 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/taint-based-evictions3814bdce-0a88-49a9-8f27-6bb1a2ceae34/events: (1.554843ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35862]
I0919 09:58:03.600509  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:03.600553  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:03.600519  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:03.600582  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:03.600648  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:03.602833  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:03.605647  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:03.607869  108095 httplog.go:90] GET /api/v1/namespaces/taint-based-evictions3814bdce-0a88-49a9-8f27-6bb1a2ceae34/pods/testpod-2: (1.803368ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35862]
I0919 09:58:03.609776  108095 httplog.go:90] GET /api/v1/namespaces/taint-based-evictions3814bdce-0a88-49a9-8f27-6bb1a2ceae34/pods/testpod-2: (1.419424ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35862]
I0919 09:58:03.611460  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.173583ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35862]
I0919 09:58:03.613835  108095 httplog.go:90] PUT /api/v1/nodes/node-1/status: (1.860401ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35862]
I0919 09:58:03.614836  108095 httplog.go:90] GET /api/v1/nodes/node-1?resourceVersion=0: (466.242µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35862]
I0919 09:58:03.617313  108095 httplog.go:90] PATCH /api/v1/nodes/node-1: (1.8329ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35862]
I0919 09:58:03.617661  108095 controller_utils.go:204] Added [&Taint{Key:node.kubernetes.io/not-ready,Value:,Effect:NoSchedule,TimeAdded:2019-09-19 09:58:03.614207742 +0000 UTC m=+337.480282609,}] Taint to Node node-1
I0919 09:58:03.617805  108095 controller_utils.go:216] Made sure that Node node-1 has no [] Taint
I0919 09:58:03.716767  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.988904ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35862]
I0919 09:58:03.819471  108095 httplog.go:90] GET /api/v1/nodes/node-1: (4.848316ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35862]
I0919 09:58:03.916300  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.716678ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35862]
I0919 09:58:03.998801  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:04.000455  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:04.000466  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:04.000497  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:04.000503  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:04.001500  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:04.016376  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.764923ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35862]
I0919 09:58:04.102762  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:04.102868  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:04.103027  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:04.102779  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:04.103113  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:04.104728  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:04.116162  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.617618ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35862]
I0919 09:58:04.189082  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:04.190974  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:04.191247  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:04.191667  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:04.192077  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:04.193183  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:04.202161  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:04.216459  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.82273ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35862]
I0919 09:58:04.310711  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:04.316751  108095 httplog.go:90] GET /api/v1/nodes/node-1: (2.131732ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35862]
I0919 09:58:04.394589  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:04.416338  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.721441ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35862]
I0919 09:58:04.515925  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.391899ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35862]
I0919 09:58:04.600710  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:04.600737  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:04.600746  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:04.600709  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:04.600768  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:04.603024  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:04.605950  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:04.616505  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.963154ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35862]
I0919 09:58:04.716455  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.816144ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35862]
I0919 09:58:04.816660  108095 httplog.go:90] GET /api/v1/nodes/node-1: (2.02461ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35862]
I0919 09:58:04.916331  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.71786ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35862]
I0919 09:58:04.998993  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:05.000687  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:05.000720  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:05.000741  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:05.000752  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:05.001695  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:05.016698  108095 httplog.go:90] GET /api/v1/nodes/node-1: (2.017952ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35862]
I0919 09:58:05.102982  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:05.103197  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:05.103258  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:05.102987  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:05.103175  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:05.105100  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:05.116385  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.772917ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35862]
I0919 09:58:05.189278  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:05.191192  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:05.191436  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:05.191852  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:05.192270  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:05.193391  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:05.202437  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:05.217290  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.781524ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35862]
I0919 09:58:05.311026  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:05.316174  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.585875ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35862]
I0919 09:58:05.394722  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:05.416395  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.737768ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35862]
I0919 09:58:05.516493  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.868813ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35862]
I0919 09:58:05.600878  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:05.600902  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:05.600955  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:05.600958  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:05.600892  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:05.603246  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:05.606124  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:05.616514  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.940507ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35862]
I0919 09:58:05.716450  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.755457ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35862]
I0919 09:58:05.816388  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.713207ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35862]
I0919 09:58:05.916480  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.862213ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35862]
I0919 09:58:05.999210  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:06.000893  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:06.000922  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:06.000928  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:06.000895  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:06.001882  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:06.016622  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.945079ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35862]
I0919 09:58:06.103407  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:06.103451  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:06.103415  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:06.103427  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:06.103439  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:06.105454  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:06.116417  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.813506ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35862]
I0919 09:58:06.189453  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:06.191352  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:06.191606  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:06.192182  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:06.192483  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:06.193546  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:06.202692  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:06.216350  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.777444ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35862]
I0919 09:58:06.311218  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:06.316452  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.815571ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35862]
I0919 09:58:06.395034  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:06.416410  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.826603ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35862]
I0919 09:58:06.516624  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.897555ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35862]
I0919 09:58:06.589522  108095 httplog.go:90] GET /api/v1/namespaces/default: (1.57366ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32858]
I0919 09:58:06.591372  108095 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.363147ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32858]
I0919 09:58:06.593171  108095 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.302555ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32858]
I0919 09:58:06.601103  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:06.601143  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:06.601114  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:06.601227  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:06.601372  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:06.603486  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:06.606417  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:06.616328  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.65323ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35862]
I0919 09:58:06.716633  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.971627ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35862]
I0919 09:58:06.816950  108095 httplog.go:90] GET /api/v1/nodes/node-1: (2.123151ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35862]
I0919 09:58:06.916561  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.957747ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35862]
I0919 09:58:06.999565  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:07.001152  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:07.001164  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:07.001169  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:07.001175  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:07.002099  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:07.016609  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.958254ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35862]
I0919 09:58:07.103619  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:07.103643  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:07.103652  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:07.103675  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:07.103619  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:07.105642  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:07.116473  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.870309ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35862]
I0919 09:58:07.189652  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:07.191560  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:07.191791  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:07.192375  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:07.192665  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:07.193695  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:07.202891  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:07.216447  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.787056ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35862]
I0919 09:58:07.311559  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:07.316568  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.966192ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35862]
I0919 09:58:07.395535  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:07.416185  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.577437ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35862]
I0919 09:58:07.516538  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.865961ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35862]
I0919 09:58:07.601323  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:07.601323  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:07.601323  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:07.601384  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:07.601504  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:07.603662  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:07.606600  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:07.616673  108095 httplog.go:90] GET /api/v1/nodes/node-1: (2.053386ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35862]
I0919 09:58:07.716448  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.771303ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35862]
I0919 09:58:07.816503  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.769592ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35862]
I0919 09:58:07.903182  108095 httplog.go:90] GET /api/v1/namespaces/default: (1.830284ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46516]
I0919 09:58:07.905037  108095 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.161467ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46516]
I0919 09:58:07.906559  108095 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.11017ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46516]
I0919 09:58:07.915954  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.399832ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35862]
I0919 09:58:07.999994  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:08.001506  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:08.001591  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:08.001683  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:08.001765  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:08.002306  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:08.016398  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.789671ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35862]
I0919 09:58:08.103723  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:08.103811  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:08.103822  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:08.103828  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:08.103841  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:08.105807  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:08.116373  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.755437ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35862]
I0919 09:58:08.189887  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:08.191770  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:08.191989  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:08.192519  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:08.192909  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:08.193851  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:08.203057  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:08.216345  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.753264ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35862]
I0919 09:58:08.311836  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:08.316200  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.607379ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35862]
I0919 09:58:08.395745  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:08.416572  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.922869ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35862]
I0919 09:58:08.492324  108095 node_lifecycle_controller.go:706] Controller observed a new Node: "node-0"
I0919 09:58:08.492503  108095 controller_utils.go:168] Recording Registered Node node-0 in Controller event message for node node-0
I0919 09:58:08.492610  108095 node_lifecycle_controller.go:1244] Initializing eviction metric for zone: region1:\x00:zone1
I0919 09:58:08.492695  108095 node_lifecycle_controller.go:706] Controller observed a new Node: "node-1"
I0919 09:58:08.492734  108095 controller_utils.go:168] Recording Registered Node node-1 in Controller event message for node node-1
I0919 09:58:08.492720  108095 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-0", UID:"06fe35ea-1c02-408a-b043-37523639bf6c", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node node-0 event: Registered Node node-0 in Controller
I0919 09:58:08.492772  108095 node_lifecycle_controller.go:706] Controller observed a new Node: "node-2"
I0919 09:58:08.492839  108095 controller_utils.go:168] Recording Registered Node node-2 in Controller event message for node node-2
I0919 09:58:08.492840  108095 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-1", UID:"22b7d560-dac9-4bfb-becb-0ab20a604ced", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node node-1 event: Registered Node node-1 in Controller
I0919 09:58:08.493015  108095 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-2", UID:"c07210e8-66c9-43ee-b42f-1d3bc76141ab", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node node-2 event: Registered Node node-2 in Controller
W0919 09:58:08.492909  108095 node_lifecycle_controller.go:940] Missing timestamp for Node node-0. Assuming now as a timestamp.
W0919 09:58:08.493291  108095 node_lifecycle_controller.go:940] Missing timestamp for Node node-1. Assuming now as a timestamp.
I0919 09:58:08.493319  108095 node_lifecycle_controller.go:770] Node node-1 is NotReady as of 2019-09-19 09:58:08.493305134 +0000 UTC m=+342.359379999. Adding it to the Taint queue.
W0919 09:58:08.493342  108095 node_lifecycle_controller.go:940] Missing timestamp for Node node-2. Assuming now as a timestamp.
I0919 09:58:08.493395  108095 node_lifecycle_controller.go:1144] Controller detected that zone region1:\x00:zone1 is now in state Normal.
I0919 09:58:08.495829  108095 httplog.go:90] POST /api/v1/namespaces/default/events: (2.268838ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35862]
I0919 09:58:08.498195  108095 httplog.go:90] POST /api/v1/namespaces/default/events: (1.850948ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35862]
I0919 09:58:08.500176  108095 httplog.go:90] POST /api/v1/namespaces/default/events: (1.409125ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35862]
I0919 09:58:08.501412  108095 httplog.go:90] GET /api/v1/nodes/node-1?resourceVersion=0: (532.596µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35862]
I0919 09:58:08.504296  108095 httplog.go:90] PATCH /api/v1/nodes/node-1: (1.969583ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35862]
I0919 09:58:08.504649  108095 controller_utils.go:204] Added [&Taint{Key:node.kubernetes.io/not-ready,Value:,Effect:NoExecute,TimeAdded:2019-09-19 09:58:08.500657188 +0000 UTC m=+342.366732071,}] Taint to Node node-1
I0919 09:58:08.504767  108095 controller_utils.go:216] Made sure that Node node-1 has no [&Taint{Key:node.kubernetes.io/unreachable,Value:,Effect:NoExecute,TimeAdded:<nil>,}] Taint
I0919 09:58:08.504666  108095 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-1"}
I0919 09:58:08.504881  108095 taint_manager.go:438] Updating known taints on node node-1: [{node.kubernetes.io/not-ready  NoExecute 2019-09-19 09:58:08 +0000 UTC}]
I0919 09:58:08.504982  108095 timed_workers.go:110] Adding TimedWorkerQueue item taint-based-evictions3814bdce-0a88-49a9-8f27-6bb1a2ceae34/testpod-2 at 2019-09-19 09:58:08.504969694 +0000 UTC m=+342.371044572 to be fired at 2019-09-19 09:58:08.504969694 +0000 UTC m=+342.371044572
I0919 09:58:08.505029  108095 taint_manager.go:105] NoExecuteTaintManager is deleting Pod: taint-based-evictions3814bdce-0a88-49a9-8f27-6bb1a2ceae34/testpod-2
I0919 09:58:08.505398  108095 event.go:255] Event(v1.ObjectReference{Kind:"Pod", Namespace:"taint-based-evictions3814bdce-0a88-49a9-8f27-6bb1a2ceae34", Name:"testpod-2", UID:"", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TaintManagerEviction' Marking for deletion Pod taint-based-evictions3814bdce-0a88-49a9-8f27-6bb1a2ceae34/testpod-2
I0919 09:58:08.506774  108095 httplog.go:90] POST /api/v1/namespaces/taint-based-evictions3814bdce-0a88-49a9-8f27-6bb1a2ceae34/events: (1.14245ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35864]
I0919 09:58:08.507800  108095 httplog.go:90] DELETE /api/v1/namespaces/taint-based-evictions3814bdce-0a88-49a9-8f27-6bb1a2ceae34/pods/testpod-2: (2.542722ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35862]
I0919 09:58:08.515581  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.022097ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35862]
I0919 09:58:08.601534  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:08.601534  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:08.601599  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:08.601557  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:08.601654  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:08.603841  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:08.606813  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:08.616571  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.847853ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35862]
I0919 09:58:08.716560  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.910108ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35862]
I0919 09:58:08.816529  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.871257ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35862]
I0919 09:58:08.916308  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.651115ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35862]
I0919 09:58:09.000300  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:09.001724  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:09.001724  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:09.001896  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:09.001905  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:09.002491  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:09.016372  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.746096ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35862]
I0919 09:58:09.103983  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:09.103999  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:09.103984  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:09.103993  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:09.104011  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:09.106011  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:09.116401  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.802017ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35862]
I0919 09:58:09.190076  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:09.191965  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:09.192123  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:09.192689  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:09.193118  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:09.194035  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:09.203272  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:09.216226  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.606751ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35862]
I0919 09:58:09.312031  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:09.316453  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.770039ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35862]
I0919 09:58:09.395979  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:09.416769  108095 httplog.go:90] GET /api/v1/nodes/node-1: (2.128747ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35862]
I0919 09:58:09.516508  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.9717ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35862]
I0919 09:58:09.601745  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:09.601760  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:09.601761  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:09.601778  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:09.601779  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:09.604031  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:09.606994  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:09.616629  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.981341ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35862]
I0919 09:58:09.716507  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.824965ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35862]
I0919 09:58:09.816520  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.858836ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35862]
I0919 09:58:09.916585  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.973085ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35862]
I0919 09:58:10.000499  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:10.001898  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:10.001901  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:10.002032  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:10.002032  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:10.002662  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:10.016236  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.691629ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35862]
I0919 09:58:10.104202  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:10.104237  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:10.104213  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:10.104213  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:10.104471  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:10.106183  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:10.116640  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.988835ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35862]
I0919 09:58:10.190468  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:10.192163  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:10.192304  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:10.192850  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:10.193318  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:10.194320  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:10.203463  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:10.216188  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.603626ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35862]
I0919 09:58:10.312203  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:10.316540  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.748928ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35862]
I0919 09:58:10.396194  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:10.416420  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.784527ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35862]
I0919 09:58:10.516872  108095 httplog.go:90] GET /api/v1/nodes/node-1: (2.130279ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35862]
I0919 09:58:10.602193  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:10.602269  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:10.602378  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:10.602480  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:10.602494  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:10.604406  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:10.607190  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:10.616855  108095 httplog.go:90] GET /api/v1/nodes/node-1: (2.18606ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35862]
I0919 09:58:10.716614  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.84763ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35862]
I0919 09:58:10.816178  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.511659ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35862]
I0919 09:58:10.916438  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.780397ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35862]
I0919 09:58:11.000701  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:11.002112  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:11.002112  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:11.002180  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:11.002190  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:11.002841  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:11.016446  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.812123ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35862]
I0919 09:58:11.104381  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:11.104384  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:11.104404  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:11.104409  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:11.104699  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:11.106361  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:11.116715  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.974494ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35862]
I0919 09:58:11.190632  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:11.192496  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:11.192516  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:11.192978  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:11.193690  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:11.194499  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:11.203649  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:11.216577  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.853471ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35862]
I0919 09:58:11.312361  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:11.316199  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.599285ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35862]
I0919 09:58:11.396419  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:11.416429  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.785196ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35862]
I0919 09:58:11.516487  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.841838ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35862]
I0919 09:58:11.602382  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:11.602472  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:11.602475  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:11.602610  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:11.602655  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:11.604582  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:11.607352  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:11.616592  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.902259ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35862]
I0919 09:58:11.716582  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.896621ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35862]
I0919 09:58:11.816463  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.836294ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35862]
I0919 09:58:11.916478  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.886395ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35862]
I0919 09:58:12.000984  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:12.002289  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:12.002342  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:12.002376  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:12.002298  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:12.003031  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:12.016571  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.968014ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35862]
I0919 09:58:12.104585  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:12.104630  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:12.104627  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:12.104657  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:12.104877  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:12.106545  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:12.116335  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.770989ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35862]
I0919 09:58:12.190841  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:12.192595  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:12.192716  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:12.193183  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:12.193851  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:12.194661  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:12.203907  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:12.216610  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.957246ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35862]
I0919 09:58:12.312723  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:12.316314  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.6676ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35862]
I0919 09:58:12.396626  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:12.416298  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.616ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35862]
I0919 09:58:12.516153  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.585635ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35862]
I0919 09:58:12.602771  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:12.602780  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:12.602899  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:12.602910  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:12.602923  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:12.604807  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:12.607583  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:12.616241  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.620072ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35862]
I0919 09:58:12.716459  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.843104ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35862]
I0919 09:58:12.792884  108095 httplog.go:90] GET /api/v1/namespaces/default: (1.813343ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46632]
I0919 09:58:12.794702  108095 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.353102ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46632]
I0919 09:58:12.796279  108095 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.150694ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46632]
I0919 09:58:12.816371  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.789246ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35862]
I0919 09:58:12.916748  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.903956ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35862]
I0919 09:58:12.997374  108095 httplog.go:90] GET /api/v1/namespaces/default: (1.707329ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35862]
I0919 09:58:12.999223  108095 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.388622ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35862]
I0919 09:58:13.001016  108095 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.066675ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35862]
I0919 09:58:13.001187  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:13.002453  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:13.002553  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:13.002520  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:13.002621  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:13.003188  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:13.016179  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.599643ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35862]
I0919 09:58:13.105039  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:13.105039  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:13.105039  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:13.105118  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:13.105140  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:13.106822  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:13.116596  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.926639ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35862]
I0919 09:58:13.191468  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:13.192847  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:13.192883  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:13.193342  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:13.194025  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:13.194821  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:13.204146  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:13.216298  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.727319ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35862]
I0919 09:58:13.312906  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:13.316578  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.918923ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35862]
I0919 09:58:13.396818  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:13.416430  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.877018ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35862]
I0919 09:58:13.493673  108095 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 5.000427142s. Last Ready is: &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:0001-01-01 00:00:00 +0000 UTC,Reason:,Message:,}
I0919 09:58:13.493736  108095 node_lifecycle_controller.go:1012] Condition MemoryPressure of node node-0 was never updated by kubelet
I0919 09:58:13.493746  108095 node_lifecycle_controller.go:1012] Condition DiskPressure of node node-0 was never updated by kubelet
I0919 09:58:13.493754  108095 node_lifecycle_controller.go:1012] Condition PIDPressure of node node-0 was never updated by kubelet
I0919 09:58:13.497263  108095 httplog.go:90] PUT /api/v1/nodes/node-0/status: (2.993913ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35862]
I0919 09:58:13.497959  108095 controller_utils.go:180] Recording status change NodeNotReady event message for node node-0
I0919 09:58:13.498011  108095 controller_utils.go:124] Update ready status of pods on node [node-0]
I0919 09:58:13.498131  108095 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-0", UID:"06fe35ea-1c02-408a-b043-37523639bf6c", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeNotReady' Node node-0 status is now: NodeNotReady
I0919 09:58:13.498405  108095 httplog.go:90] GET /api/v1/nodes/node-0?resourceVersion=0: (565.352µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35864]
I0919 09:58:13.500523  108095 httplog.go:90] POST /api/v1/namespaces/default/events: (1.96307ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35862]
I0919 09:58:13.500796  108095 httplog.go:90] GET /api/v1/pods?fieldSelector=spec.nodeName%3Dnode-0: (1.41987ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35864]
I0919 09:58:13.501194  108095 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 5.007886248s. Last Ready is: &NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:0001-01-01 00:00:00 +0000 UTC,Reason:,Message:,}
I0919 09:58:13.501404  108095 node_lifecycle_controller.go:1012] Condition MemoryPressure of node node-1 was never updated by kubelet
I0919 09:58:13.501464  108095 node_lifecycle_controller.go:1012] Condition DiskPressure of node node-1 was never updated by kubelet
I0919 09:58:13.501521  108095 node_lifecycle_controller.go:1012] Condition PIDPressure of node node-1 was never updated by kubelet
I0919 09:58:13.501902  108095 httplog.go:90] PATCH /api/v1/nodes/node-0: (2.460012ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35870]
I0919 09:58:13.502293  108095 controller_utils.go:204] Added [&Taint{Key:node.kubernetes.io/unreachable,Value:,Effect:NoSchedule,TimeAdded:2019-09-19 09:58:13.497644131 +0000 UTC m=+347.363718985,}] Taint to Node node-0
I0919 09:58:13.502339  108095 controller_utils.go:216] Made sure that Node node-0 has no [] Taint
I0919 09:58:13.503972  108095 httplog.go:90] PUT /api/v1/nodes/node-1/status: (2.082158ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35862]
I0919 09:58:13.504402  108095 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 5.011049414s. Last Ready is: &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:0001-01-01 00:00:00 +0000 UTC,Reason:,Message:,}
I0919 09:58:13.504444  108095 node_lifecycle_controller.go:1012] Condition MemoryPressure of node node-2 was never updated by kubelet
I0919 09:58:13.504454  108095 node_lifecycle_controller.go:1012] Condition DiskPressure of node node-2 was never updated by kubelet
I0919 09:58:13.504463  108095 node_lifecycle_controller.go:1012] Condition PIDPressure of node node-2 was never updated by kubelet
I0919 09:58:13.505208  108095 httplog.go:90] GET /api/v1/nodes/node-1?resourceVersion=0: (391.971µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35868]
I0919 09:58:13.506491  108095 httplog.go:90] PUT /api/v1/nodes/node-2/status: (1.819955ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35870]
I0919 09:58:13.506758  108095 controller_utils.go:180] Recording status change NodeNotReady event message for node node-2
I0919 09:58:13.506788  108095 controller_utils.go:124] Update ready status of pods on node [node-2]
I0919 09:58:13.506908  108095 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-2", UID:"c07210e8-66c9-43ee-b42f-1d3bc76141ab", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeNotReady' Node node-2 status is now: NodeNotReady
I0919 09:58:13.508194  108095 httplog.go:90] GET /api/v1/nodes/node-2?resourceVersion=0: (1.121826ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35870]
I0919 09:58:13.508266  108095 httplog.go:90] PATCH /api/v1/nodes/node-1: (1.721145ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35868]
I0919 09:58:13.508706  108095 httplog.go:90] GET /api/v1/pods?fieldSelector=spec.nodeName%3Dnode-2: (1.441771ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35872]
I0919 09:58:13.508728  108095 httplog.go:90] POST /api/v1/namespaces/default/events: (1.485936ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35874]
I0919 09:58:13.509101  108095 controller_utils.go:204] Added [&Taint{Key:node.kubernetes.io/unreachable,Value:,Effect:NoSchedule,TimeAdded:2019-09-19 09:58:13.504569772 +0000 UTC m=+347.370644642,}] Taint to Node node-1
I0919 09:58:13.509268  108095 node_lifecycle_controller.go:1094] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
I0919 09:58:13.509693  108095 httplog.go:90] GET /api/v1/nodes/node-1?resourceVersion=0: (276.27µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35872]
I0919 09:58:13.509756  108095 httplog.go:90] GET /api/v1/nodes/node-1?resourceVersion=0: (441.914µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35868]
I0919 09:58:13.511696  108095 httplog.go:90] PATCH /api/v1/nodes/node-2: (1.838113ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35870]
I0919 09:58:13.512043  108095 controller_utils.go:204] Added [&Taint{Key:node.kubernetes.io/unreachable,Value:,Effect:NoSchedule,TimeAdded:2019-09-19 09:58:13.50666724 +0000 UTC m=+347.372742126,}] Taint to Node node-2
I0919 09:58:13.512142  108095 controller_utils.go:216] Made sure that Node node-2 has no [] Taint
I0919 09:58:13.512325  108095 store.go:362] GuaranteedUpdate of /0b01a6a7-1d5a-40d7-a1cf-22226d9b2b56/minions/node-1 failed because of a conflict, going to retry
I0919 09:58:13.512363  108095 httplog.go:90] PATCH /api/v1/nodes/node-1: (2.046892ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35868]
I0919 09:58:13.512514  108095 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-1"}
I0919 09:58:13.512570  108095 taint_manager.go:438] Updating known taints on node node-1: []
I0919 09:58:13.512615  108095 taint_manager.go:459] All taints were removed from the Node node-1. Cancelling all evictions...
I0919 09:58:13.512635  108095 timed_workers.go:129] Cancelling TimedWorkerQueue item taint-based-evictions3814bdce-0a88-49a9-8f27-6bb1a2ceae34/testpod-2 at 2019-09-19 09:58:13.512631879 +0000 UTC m=+347.378706749
I0919 09:58:13.514388  108095 httplog.go:90] PATCH /api/v1/nodes/node-1: (3.725054ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35872]
I0919 09:58:13.514540  108095 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-1"}
I0919 09:58:13.514565  108095 taint_manager.go:438] Updating known taints on node node-1: [{node.kubernetes.io/not-ready  NoExecute 2019-09-19 09:58:08 +0000 UTC}]
I0919 09:58:13.514599  108095 timed_workers.go:110] Adding TimedWorkerQueue item taint-based-evictions3814bdce-0a88-49a9-8f27-6bb1a2ceae34/testpod-2 at 2019-09-19 09:58:13.514587039 +0000 UTC m=+347.380661924 to be fired at 2019-09-19 09:58:13.514587039 +0000 UTC m=+347.380661924
I0919 09:58:13.514643  108095 taint_manager.go:105] NoExecuteTaintManager is deleting Pod: taint-based-evictions3814bdce-0a88-49a9-8f27-6bb1a2ceae34/testpod-2
I0919 09:58:13.514828  108095 controller_utils.go:216] Made sure that Node node-1 has no [&Taint{Key:node.kubernetes.io/not-ready,Value:,Effect:NoSchedule,TimeAdded:2019-09-19 09:58:03 +0000 UTC,}] Taint
I0919 09:58:13.514832  108095 event.go:255] Event(v1.ObjectReference{Kind:"Pod", Namespace:"taint-based-evictions3814bdce-0a88-49a9-8f27-6bb1a2ceae34", Name:"testpod-2", UID:"", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TaintManagerEviction' Marking for deletion Pod taint-based-evictions3814bdce-0a88-49a9-8f27-6bb1a2ceae34/testpod-2
I0919 09:58:13.516760  108095 httplog.go:90] DELETE /api/v1/namespaces/taint-based-evictions3814bdce-0a88-49a9-8f27-6bb1a2ceae34/pods/testpod-2: (1.896034ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35868]
I0919 09:58:13.516797  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.100246ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35876]
I0919 09:58:13.516825  108095 httplog.go:90] PATCH /api/v1/namespaces/taint-based-evictions3814bdce-0a88-49a9-8f27-6bb1a2ceae34/events/testpod-2.15c5ce7ea30a2ca5: (1.64013ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35870]
I0919 09:58:13.603185  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:13.603324  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:13.603335  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:13.603344  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:13.603338  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:13.604991  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:13.607770  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:13.616244  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.695562ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35876]
I0919 09:58:13.716623  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.983731ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35876]
I0919 09:58:13.817380  108095 httplog.go:90] GET /api/v1/nodes/node-1: (2.675417ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35876]
I0919 09:58:13.916560  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.962967ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35876]
I0919 09:58:14.001423  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:14.002729  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:14.002740  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:14.002836  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:14.002740  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:14.003470  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:14.016521  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.871521ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35876]
I0919 09:58:14.105428  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:14.105513  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:14.105652  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:14.105660  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:14.105780  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:14.107034  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:14.116813  108095 httplog.go:90] GET /api/v1/nodes/node-1: (2.212173ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35876]
I0919 09:58:14.191855  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:14.193012  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:14.193041  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:14.193416  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:14.194188  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:14.194983  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:14.204345  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:14.216637  108095 httplog.go:90] GET /api/v1/nodes/node-1: (2.050819ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35876]
I0919 09:58:14.313049  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:14.317244  108095 httplog.go:90] GET /api/v1/nodes/node-1: (2.114151ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35876]
I0919 09:58:14.397033  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:14.416801  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.969097ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35876]
I0919 09:58:14.516753  108095 httplog.go:90] GET /api/v1/nodes/node-1: (2.119824ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35876]
I0919 09:58:14.603394  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:14.603406  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:14.603485  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:14.603640  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:14.603647  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:14.605139  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:14.607919  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:14.616397  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.749613ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35876]
I0919 09:58:14.716442  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.853014ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35876]
I0919 09:58:14.816499  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.854088ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35876]
I0919 09:58:14.919561  108095 httplog.go:90] GET /api/v1/nodes/node-1: (4.896978ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35876]
I0919 09:58:15.001749  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:15.002995  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:15.003032  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:15.003014  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:15.003131  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:15.003735  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:15.016564  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.909827ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35876]
I0919 09:58:15.105648  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:15.105649  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:15.105878  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:15.105927  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:15.106048  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:15.107241  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:15.116639  108095 httplog.go:90] GET /api/v1/nodes/node-1: (2.067417ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35876]
I0919 09:58:15.192009  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:15.193076  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:15.193256  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:15.193537  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:15.194363  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:15.195066  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:15.204485  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:15.216857  108095 httplog.go:90] GET /api/v1/nodes/node-1: (2.267867ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35876]
I0919 09:58:15.313176  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:15.316579  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.944876ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35876]
I0919 09:58:15.397363  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:15.416450  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.761183ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35876]
I0919 09:58:15.516452  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.775214ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35876]
I0919 09:58:15.603599  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:15.603630  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:15.603605  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:15.603758  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:15.603976  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:15.605340  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:15.608042  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:15.616251  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.591292ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35876]
I0919 09:58:15.716614  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.914692ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35876]
I0919 09:58:15.816485  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.858114ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35876]
I0919 09:58:15.916538  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.95975ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35876]
I0919 09:58:16.002075  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:16.003187  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:16.003204  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:16.003317  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:16.003336  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:16.003951  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:16.016538  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.910164ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35876]
I0919 09:58:16.106018  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:16.106018  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:16.106049  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:16.106056  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:16.106245  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:16.107426  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:16.116072  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.550646ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35876]
I0919 09:58:16.192196  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:16.193377  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:16.193464  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:16.193699  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:16.194632  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:16.195269  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:16.204698  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:16.216489  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.909958ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35876]
I0919 09:58:16.313384  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:16.316377  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.793261ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35876]
I0919 09:58:16.397604  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:16.416321  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.628565ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35876]
I0919 09:58:16.516476  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.856379ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35876]
I0919 09:58:16.589728  108095 httplog.go:90] GET /api/v1/namespaces/default: (1.650428ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32858]
I0919 09:58:16.591710  108095 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.429338ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32858]
I0919 09:58:16.593250  108095 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.075404ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32858]
I0919 09:58:16.603787  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:16.603798  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:16.603826  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:16.603876  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:16.604175  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:16.605498  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:16.608214  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:16.616285  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.737462ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35876]
I0919 09:58:16.716460  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.83751ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35876]
I0919 09:58:16.816762  108095 httplog.go:90] GET /api/v1/nodes/node-1: (2.017117ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35876]
I0919 09:58:16.916640  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.977694ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35876]
I0919 09:58:17.002303  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:17.003357  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:17.003358  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:17.003473  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:17.003478  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:17.004194  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:17.016636  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.977537ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35876]
I0919 09:58:17.106518  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:17.106579  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:17.106600  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:17.106784  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:17.106801  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:17.107731  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:17.116689  108095 httplog.go:90] GET /api/v1/nodes/node-1: (2.061353ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35876]
I0919 09:58:17.192432  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:17.193699  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:17.193701  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:17.193888  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:17.194834  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:17.195414  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:17.205033  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:17.216718  108095 httplog.go:90] GET /api/v1/nodes/node-1: (2.108983ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35876]
I0919 09:58:17.313585  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:17.316355  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.750428ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35876]
I0919 09:58:17.397851  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:17.416602  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.940787ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35876]
I0919 09:58:17.516629  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.988463ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35876]
I0919 09:58:17.603986  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:17.604009  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:17.604017  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:17.604065  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:17.604340  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:17.605672  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:17.608378  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:17.616354  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.783251ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35876]
I0919 09:58:17.716503  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.84364ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35876]
I0919 09:58:17.816506  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.912514ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35876]
I0919 09:58:17.903109  108095 httplog.go:90] GET /api/v1/namespaces/default: (1.54332ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46516]
I0919 09:58:17.905006  108095 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.187342ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46516]
I0919 09:58:17.906529  108095 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.072657ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46516]
I0919 09:58:17.915987  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.425268ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35876]
I0919 09:58:18.002517  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:18.003596  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:18.003605  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:18.003651  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:18.003629  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:18.004355  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:18.016582  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.892443ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35876]
I0919 09:58:18.106704  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:18.106731  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:18.106704  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:18.106975  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:18.107037  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:18.107964  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:18.116413  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.796492ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35876]
I0919 09:58:18.192630  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:18.193887  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:18.193901  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:18.194082  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:18.195019  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:18.195520  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:18.205200  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:18.216504  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.913733ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35876]
I0919 09:58:18.313783  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:18.316330  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.709351ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35876]
I0919 09:58:18.398059  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:18.416667  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.977027ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35876]
I0919 09:58:18.513634  108095 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 10.02031797s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-19 09:58:13 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0919 09:58:18.513879  108095 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 10.020571842s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 09:58:03 +0000 UTC,LastTransitionTime:2019-09-19 09:58:13 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 09:58:18.514076  108095 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 10.020771202s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 09:58:03 +0000 UTC,LastTransitionTime:2019-09-19 09:58:13 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 09:58:18.514166  108095 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 10.020862066s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 09:58:03 +0000 UTC,LastTransitionTime:2019-09-19 09:58:13 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 09:58:18.515497  108095 httplog.go:90] GET /api/v1/nodes/node-1?resourceVersion=0: (902.478µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35876]
I0919 09:58:18.516416  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.829932ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35868]
I0919 09:58:18.519100  108095 httplog.go:90] PATCH /api/v1/nodes/node-1: (2.450301ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35876]
I0919 09:58:18.519359  108095 controller_utils.go:204] Added [&Taint{Key:node.kubernetes.io/unreachable,Value:,Effect:NoExecute,TimeAdded:2019-09-19 09:58:18.514291257 +0000 UTC m=+352.380366125,}] Taint to Node node-1
I0919 09:58:18.519748  108095 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-1"}
I0919 09:58:18.519781  108095 taint_manager.go:438] Updating known taints on node node-1: [{node.kubernetes.io/not-ready  NoExecute 2019-09-19 09:58:08 +0000 UTC} {node.kubernetes.io/unreachable  NoExecute 2019-09-19 09:58:18 +0000 UTC}]
I0919 09:58:18.519813  108095 timed_workers.go:110] Adding TimedWorkerQueue item taint-based-evictions3814bdce-0a88-49a9-8f27-6bb1a2ceae34/testpod-2 at 2019-09-19 09:58:18.519803073 +0000 UTC m=+352.385877948 to be fired at 2019-09-19 09:58:18.519803073 +0000 UTC m=+352.385877948
W0919 09:58:18.519825  108095 timed_workers.go:115] Trying to add already existing work for &{NamespacedName:taint-based-evictions3814bdce-0a88-49a9-8f27-6bb1a2ceae34/testpod-2}. Skipping.
I0919 09:58:18.520076  108095 httplog.go:90] GET /api/v1/nodes/node-1?resourceVersion=0: (486.615µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35876]
I0919 09:58:18.522763  108095 httplog.go:90] PATCH /api/v1/nodes/node-1: (2.001858ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35876]
I0919 09:58:18.523081  108095 controller_utils.go:216] Made sure that Node node-1 has no [&Taint{Key:node.kubernetes.io/not-ready,Value:,Effect:NoExecute,TimeAdded:<nil>,}] Taint
I0919 09:58:18.523283  108095 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 10.029932308s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-19 09:58:13 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0919 09:58:18.523381  108095 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 10.030032878s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 09:58:03 +0000 UTC,LastTransitionTime:2019-09-19 09:58:13 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 09:58:18.523164  108095 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-1"}
I0919 09:58:18.523475  108095 taint_manager.go:438] Updating known taints on node node-1: [{node.kubernetes.io/unreachable  NoExecute 2019-09-19 09:58:18 +0000 UTC}]
I0919 09:58:18.523514  108095 timed_workers.go:110] Adding TimedWorkerQueue item taint-based-evictions3814bdce-0a88-49a9-8f27-6bb1a2ceae34/testpod-2 at 2019-09-19 09:58:18.523498595 +0000 UTC m=+352.389573472 to be fired at 2019-09-19 10:03:18.523498595 +0000 UTC m=+652.389573472
W0919 09:58:18.523538  108095 timed_workers.go:115] Trying to add already existing work for &{NamespacedName:taint-based-evictions3814bdce-0a88-49a9-8f27-6bb1a2ceae34/testpod-2}. Skipping.
I0919 09:58:18.523450  108095 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 10.030101648s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 09:58:03 +0000 UTC,LastTransitionTime:2019-09-19 09:58:13 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 09:58:18.523660  108095 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 10.030312086s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 09:58:03 +0000 UTC,LastTransitionTime:2019-09-19 09:58:13 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 09:58:18.523765  108095 node_lifecycle_controller.go:796] Node node-2 is unresponsive as of 2019-09-19 09:58:18.523738794 +0000 UTC m=+352.389813668. Adding it to the Taint queue.
I0919 09:58:18.523855  108095 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 10.030630729s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-19 09:58:13 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0919 09:58:18.523951  108095 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 10.030707507s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 09:58:03 +0000 UTC,LastTransitionTime:2019-09-19 09:58:13 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 09:58:18.524039  108095 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 10.030814213s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 09:58:03 +0000 UTC,LastTransitionTime:2019-09-19 09:58:13 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 09:58:18.524104  108095 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 10.030879394s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 09:58:03 +0000 UTC,LastTransitionTime:2019-09-19 09:58:13 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 09:58:18.524206  108095 node_lifecycle_controller.go:796] Node node-0 is unresponsive as of 2019-09-19 09:58:18.524192059 +0000 UTC m=+352.390266934. Adding it to the Taint queue.
I0919 09:58:18.604184  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:18.604184  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:18.604184  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:18.604209  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:18.604516  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:18.605834  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:18.608541  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:18.616438  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.8472ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35876]
I0919 09:58:18.716897  108095 httplog.go:90] GET /api/v1/nodes/node-1: (2.304845ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35876]
I0919 09:58:18.816371  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.743821ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35876]
I0919 09:58:18.916649  108095 httplog.go:90] GET /api/v1/nodes/node-1: (2.035457ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35876]
I0919 09:58:19.002790  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:19.003812  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:19.003812  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:19.003903  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:19.003999  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:19.004590  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:19.016504  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.854255ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35876]
I0919 09:58:19.106887  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:19.106887  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:19.106887  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:19.107048  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:19.107143  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:19.108143  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:19.116212  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.680413ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35876]
I0919 09:58:19.192986  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:19.194139  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:19.194140  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:19.194365  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:19.195196  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:19.195674  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:19.205300  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:19.216567  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.927833ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35876]
I0919 09:58:19.314289  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:19.316376  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.81414ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35876]
I0919 09:58:19.398267  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:19.416531  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.846066ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35876]
I0919 09:58:19.516653  108095 httplog.go:90] GET /api/v1/nodes/node-1: (2.022401ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35876]
I0919 09:58:19.604560  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:19.604794  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:19.604952  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:19.604703  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:19.604824  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:19.606000  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:19.608684  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:19.616369  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.808509ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35876]
I0919 09:58:19.716390  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.783411ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35876]
I0919 09:58:19.816544  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.897928ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35876]
I0919 09:58:19.916642  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.939039ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35876]
I0919 09:58:20.003027  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:20.004002  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:20.004008  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:20.004042  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:20.004233  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:20.004766  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:20.016620  108095 httplog.go:90] GET /api/v1/nodes/node-1: (2.016114ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35876]
I0919 09:58:20.107117  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:20.107152  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:20.107117  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:20.107175  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:20.107298  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:20.108379  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:20.116746  108095 httplog.go:90] GET /api/v1/nodes/node-1: (2.145989ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35876]
I0919 09:58:20.193153  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:20.194309  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:20.194310  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:20.194518  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:20.195387  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:20.195931  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:20.205636  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:20.216411  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.777826ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35876]
I0919 09:58:20.314467  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:20.316577  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.897824ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35876]
I0919 09:58:20.398696  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:20.416417  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.770709ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35876]
I0919 09:58:20.516626  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.931868ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35876]
I0919 09:58:20.604956  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:20.605251  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:20.605111  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:20.605134  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:20.605170  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:20.606222  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:20.608889  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:20.616351  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.740081ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35876]
I0919 09:58:20.716484  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.916462ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35876]
I0919 09:58:20.816456  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.81277ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35876]
I0919 09:58:20.916554  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.941874ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35876]
I0919 09:58:21.003239  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:21.004298  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:21.004305  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:21.004342  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:21.004428  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:21.004996  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:21.016539  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.933309ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35876]
I0919 09:58:21.107327  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:21.107327  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:21.107334  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:21.107361  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:21.107444  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:21.108593  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:21.116498  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.903194ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35876]
I0919 09:58:21.193363  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:21.194494  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:21.194494  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:21.194669  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:21.195567  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:21.196078  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:21.205832  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:21.216314  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.660978ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35876]
I0919 09:58:21.314632  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:21.316553  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.874852ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35876]
I0919 09:58:21.398918  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:21.416576  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.969034ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35876]
I0919 09:58:21.516504  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.869462ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35876]
I0919 09:58:21.605409  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:21.605521  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:21.605531  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:21.605603  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:21.605630  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:21.606436  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:21.609070  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:21.616549  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.965531ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35876]
I0919 09:58:21.716479  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.815589ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35876]
I0919 09:58:21.816702  108095 httplog.go:90] GET /api/v1/nodes/node-1: (2.004826ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35876]
I0919 09:58:21.916710  108095 httplog.go:90] GET /api/v1/nodes/node-1: (2.049655ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35876]
I0919 09:58:22.003469  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:22.004471  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:22.004474  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:22.004547  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:22.004619  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:22.005210  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:22.016410  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.774007ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35876]
I0919 09:58:22.107673  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:22.107678  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:22.107660  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:22.107894  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:22.107922  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:22.108981  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:22.116367  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.744337ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35876]
I0919 09:58:22.193797  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:22.194691  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:22.194693  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:22.194910  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:22.195785  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:22.196213  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:22.206027  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:22.216588  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.873994ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35876]
I0919 09:58:22.314791  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:22.316682  108095 httplog.go:90] GET /api/v1/nodes/node-1: (2.012355ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35876]
I0919 09:58:22.399232  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:22.416278  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.654715ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35876]
I0919 09:58:22.516541  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.91421ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35876]
I0919 09:58:22.605667  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:22.605705  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:22.605667  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:22.605752  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:22.605808  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:22.606648  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:22.609422  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:22.616796  108095 httplog.go:90] GET /api/v1/nodes/node-1: (2.146095ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35876]
I0919 09:58:22.716703  108095 httplog.go:90] GET /api/v1/nodes/node-1: (2.018866ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35876]
I0919 09:58:22.793267  108095 httplog.go:90] GET /api/v1/namespaces/default: (2.004576ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46632]
I0919 09:58:22.795645  108095 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.526258ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46632]
I0919 09:58:22.797341  108095 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.145786ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46632]
I0919 09:58:22.816413  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.803153ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35876]
I0919 09:58:22.916446  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.821027ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35876]
I0919 09:58:22.997619  108095 httplog.go:90] GET /api/v1/namespaces/default: (1.740887ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35876]
I0919 09:58:22.999605  108095 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.413377ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35876]
I0919 09:58:23.001572  108095 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.442699ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35876]
I0919 09:58:23.003816  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:23.004629  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:23.004647  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:23.004683  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:23.004774  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:23.005427  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:23.016880  108095 httplog.go:90] GET /api/v1/nodes/node-1: (2.268816ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35876]
I0919 09:58:23.107822  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:23.107825  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:23.107843  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:23.108252  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:23.108254  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:23.109219  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:23.116260  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.705937ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35876]
I0919 09:58:23.194029  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:23.194842  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:23.194965  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:23.195014  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:23.196048  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:23.196416  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:23.206187  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:23.216597  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.900376ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35876]
I0919 09:58:23.314989  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:23.317191  108095 httplog.go:90] GET /api/v1/nodes/node-1: (2.546048ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35876]
I0919 09:58:23.399487  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:23.416589  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.962274ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35876]
I0919 09:58:23.516598  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.924816ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35876]
I0919 09:58:23.524529  108095 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 15.031168755s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-19 09:58:13 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0919 09:58:23.524592  108095 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 15.031244986s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 09:58:03 +0000 UTC,LastTransitionTime:2019-09-19 09:58:13 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 09:58:23.524607  108095 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 15.031260098s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 09:58:03 +0000 UTC,LastTransitionTime:2019-09-19 09:58:13 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 09:58:23.524618  108095 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 15.03127209s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 09:58:03 +0000 UTC,LastTransitionTime:2019-09-19 09:58:13 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 09:58:23.524681  108095 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 15.031458064s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-19 09:58:13 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0919 09:58:23.524695  108095 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 15.031472482s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 09:58:03 +0000 UTC,LastTransitionTime:2019-09-19 09:58:13 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 09:58:23.524719  108095 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 15.031496235s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 09:58:03 +0000 UTC,LastTransitionTime:2019-09-19 09:58:13 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 09:58:23.524729  108095 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 15.031506454s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 09:58:03 +0000 UTC,LastTransitionTime:2019-09-19 09:58:13 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 09:58:23.524768  108095 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 15.03146553s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-19 09:58:13 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0919 09:58:23.524779  108095 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 15.03147723s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 09:58:03 +0000 UTC,LastTransitionTime:2019-09-19 09:58:13 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 09:58:23.524789  108095 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 15.031487151s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 09:58:03 +0000 UTC,LastTransitionTime:2019-09-19 09:58:13 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 09:58:23.524803  108095 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 15.031500805s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 09:58:03 +0000 UTC,LastTransitionTime:2019-09-19 09:58:13 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 09:58:23.524832  108095 node_lifecycle_controller.go:796] Node node-1 is unresponsive as of 2019-09-19 09:58:23.524815173 +0000 UTC m=+357.390890048. Adding it to the Taint queue.
I0919 09:58:23.605899  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:23.605914  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:23.605913  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:23.605872  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:23.606105  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:23.606854  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:23.609607  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:23.616329  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.767712ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35876]
I0919 09:58:23.716268  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.664186ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35876]
I0919 09:58:23.816335  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.726409ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35876]
I0919 09:58:23.920231  108095 httplog.go:90] GET /api/v1/nodes/node-1: (5.561818ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35876]
I0919 09:58:24.004047  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:24.004823  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:24.004848  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:24.004857  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:24.004961  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:24.005678  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:24.016829  108095 httplog.go:90] GET /api/v1/nodes/node-1: (2.140224ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35876]
I0919 09:58:24.108139  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:24.108180  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:24.108192  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:24.108501  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:24.108553  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:24.109387  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:24.116326  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.748667ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35876]
I0919 09:58:24.194349  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:24.194979  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:24.195075  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:24.195147  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:24.196326  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:24.196645  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:24.207968  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:24.216499  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.839852ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35876]
I0919 09:58:24.315169  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:24.316513  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.85542ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35876]
I0919 09:58:24.399724  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:24.416828  108095 httplog.go:90] GET /api/v1/nodes/node-1: (2.118766ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35876]
I0919 09:58:24.516355  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.767432ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35876]
I0919 09:58:24.606115  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:24.606138  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:24.606115  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:24.606132  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:24.606331  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:24.607012  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:24.610028  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:24.610141  108095 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.603487ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46516]
I0919 09:58:24.611890  108095 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.303138ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46516]
I0919 09:58:24.613472  108095 httplog.go:90] GET /api/v1/namespaces/kube-node-lease: (1.098742ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46516]
I0919 09:58:24.618896  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.519378ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35876]
I0919 09:58:24.716417  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.83606ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35876]
I0919 09:58:24.816346  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.700026ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35876]
I0919 09:58:24.916486  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.818922ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35876]
I0919 09:58:25.004284  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:25.005142  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:25.005189  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:25.005278  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:25.005386  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:25.005862  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:25.016430  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.809479ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35876]
I0919 09:58:25.108308  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:25.108341  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:25.108384  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:25.108705  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:25.108733  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:25.109541  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:25.116662  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.987ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35876]
I0919 09:58:25.194587  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:25.195169  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:25.195248  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:25.195291  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:25.196509  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:25.196786  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:25.208147  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:25.216170  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.563701ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35876]
I0919 09:58:25.315384  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:25.316358  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.792239ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35876]
I0919 09:58:25.399956  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:25.416768  108095 httplog.go:90] GET /api/v1/nodes/node-1: (2.022073ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35876]
I0919 09:58:25.516421  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.816277ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35876]
I0919 09:58:25.606309  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:25.606309  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:25.606309  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:25.606450  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:25.606316  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:25.607195  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:25.610205  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:25.616584  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.955585ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35876]
I0919 09:58:25.716624  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.909534ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35876]
I0919 09:58:25.816468  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.810182ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35876]
I0919 09:58:25.916695  108095 httplog.go:90] GET /api/v1/nodes/node-1: (2.104084ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35876]
I0919 09:58:26.004527  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:26.005333  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:26.005333  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:26.005516  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:26.005736  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:26.006044  108095 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 09:58:26.016386  108095 httplog.go:90] GET /api/v1/nodes/node-1: (1.762808ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35876]
I0919 09:58:26.108635  108095 reflector.go:236] k8s.io/client-go/informers