PR: draveness — feat: update taint nodes by condition to GA
Result: FAILURE
Tests: 8 failed / 2860 succeeded
Started: 2019-09-19 11:08
Elapsed: 29m25s
Revision:
Builder: gke-prow-ssd-pool-1a225945-5dqn
Refs: master:b8866250, 82703:2d55adec
pod: bc4ebd58-dacd-11e9-b7bb-32cecfce85d6
infra-commit: fe9f237a8
repo: k8s.io/kubernetes
repo-commit: 93909cea06b1a2c6b82721a27e0e16664677efb3
repos: {u'k8s.io/kubernetes': u'master:b88662505d288297750becf968bf307dacf872fa,82703:2d55adec6066f53a9f810d33abd9810fec0fe433'}

Test Failures


k8s.io/kubernetes/test/integration/scheduler TestNodePIDPressure (34s)

To reproduce locally:

go test -v k8s.io/kubernetes/test/integration/scheduler -run TestNodePIDPressure$
=== RUN   TestNodePIDPressure
W0919 11:32:02.974407  108424 services.go:35] No CIDR for service cluster IPs specified. Default value which was 10.0.0.0/24 is deprecated and will be removed in future releases. Please specify it using --service-cluster-ip-range on kube-apiserver.
I0919 11:32:02.974428  108424 services.go:47] Setting service IP to "10.0.0.1" (read-write).
I0919 11:32:02.974441  108424 master.go:303] Node port range unspecified. Defaulting to 30000-32767.
I0919 11:32:02.974451  108424 master.go:259] Using reconciler: 
I0919 11:32:02.976246  108424 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:02.976925  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:32:02.976958  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:32:02.978150  108424 store.go:1342] Monitoring podtemplates count at <storage-prefix>//podtemplates
I0919 11:32:02.978194  108424 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:02.978618  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:32:02.978646  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:32:02.978772  108424 reflector.go:153] Listing and watching *core.PodTemplate from storage/cacher.go:/podtemplates
I0919 11:32:02.981761  108424 store.go:1342] Monitoring events count at <storage-prefix>//events
I0919 11:32:02.981817  108424 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:02.981842  108424 reflector.go:153] Listing and watching *core.Event from storage/cacher.go:/events
I0919 11:32:02.982047  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:32:02.982074  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:32:02.983772  108424 watch_cache.go:405] Replace watchCache (rev: 30563) 
I0919 11:32:02.984165  108424 store.go:1342] Monitoring limitranges count at <storage-prefix>//limitranges
I0919 11:32:02.984208  108424 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:02.984576  108424 reflector.go:153] Listing and watching *core.LimitRange from storage/cacher.go:/limitranges
I0919 11:32:02.984927  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:32:02.984970  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:32:02.985631  108424 watch_cache.go:405] Replace watchCache (rev: 30563) 
I0919 11:32:02.988381  108424 watch_cache.go:405] Replace watchCache (rev: 30563) 
I0919 11:32:02.988827  108424 store.go:1342] Monitoring resourcequotas count at <storage-prefix>//resourcequotas
I0919 11:32:02.989051  108424 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:02.989224  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:32:02.989259  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:32:02.989355  108424 reflector.go:153] Listing and watching *core.ResourceQuota from storage/cacher.go:/resourcequotas
I0919 11:32:02.995064  108424 watch_cache.go:405] Replace watchCache (rev: 30563) 
I0919 11:32:02.995095  108424 reflector.go:153] Listing and watching *core.Secret from storage/cacher.go:/secrets
I0919 11:32:02.995066  108424 store.go:1342] Monitoring secrets count at <storage-prefix>//secrets
I0919 11:32:02.995341  108424 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:02.995587  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:32:02.995611  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:32:02.996434  108424 watch_cache.go:405] Replace watchCache (rev: 30563) 
I0919 11:32:02.996976  108424 store.go:1342] Monitoring persistentvolumes count at <storage-prefix>//persistentvolumes
I0919 11:32:02.997176  108424 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:02.997384  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:32:02.997411  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:32:02.997505  108424 reflector.go:153] Listing and watching *core.PersistentVolume from storage/cacher.go:/persistentvolumes
I0919 11:32:02.999099  108424 watch_cache.go:405] Replace watchCache (rev: 30563) 
I0919 11:32:03.001352  108424 store.go:1342] Monitoring persistentvolumeclaims count at <storage-prefix>//persistentvolumeclaims
I0919 11:32:03.001640  108424 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.001867  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:32:03.001895  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:32:03.002025  108424 reflector.go:153] Listing and watching *core.PersistentVolumeClaim from storage/cacher.go:/persistentvolumeclaims
I0919 11:32:03.005336  108424 watch_cache.go:405] Replace watchCache (rev: 30563) 
I0919 11:32:03.005653  108424 store.go:1342] Monitoring configmaps count at <storage-prefix>//configmaps
I0919 11:32:03.005773  108424 reflector.go:153] Listing and watching *core.ConfigMap from storage/cacher.go:/configmaps
I0919 11:32:03.005889  108424 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.010981  108424 watch_cache.go:405] Replace watchCache (rev: 30563) 
I0919 11:32:03.011880  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:32:03.011928  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:32:03.013034  108424 store.go:1342] Monitoring namespaces count at <storage-prefix>//namespaces
I0919 11:32:03.013292  108424 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.013506  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:32:03.013537  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:32:03.013650  108424 reflector.go:153] Listing and watching *core.Namespace from storage/cacher.go:/namespaces
I0919 11:32:03.014993  108424 store.go:1342] Monitoring endpoints count at <storage-prefix>//services/endpoints
I0919 11:32:03.015232  108424 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.015427  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:32:03.015477  108424 watch_cache.go:405] Replace watchCache (rev: 30563) 
I0919 11:32:03.015559  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:32:03.015650  108424 reflector.go:153] Listing and watching *core.Endpoints from storage/cacher.go:/services/endpoints
I0919 11:32:03.017552  108424 watch_cache.go:405] Replace watchCache (rev: 30563) 
I0919 11:32:03.019206  108424 store.go:1342] Monitoring nodes count at <storage-prefix>//minions
I0919 11:32:03.019423  108424 reflector.go:153] Listing and watching *core.Node from storage/cacher.go:/minions
I0919 11:32:03.019470  108424 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.019721  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:32:03.019756  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:32:03.021222  108424 store.go:1342] Monitoring pods count at <storage-prefix>//pods
I0919 11:32:03.021490  108424 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.021774  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:32:03.021825  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:32:03.021941  108424 reflector.go:153] Listing and watching *core.Pod from storage/cacher.go:/pods
I0919 11:32:03.022061  108424 watch_cache.go:405] Replace watchCache (rev: 30563) 
I0919 11:32:03.023341  108424 store.go:1342] Monitoring serviceaccounts count at <storage-prefix>//serviceaccounts
I0919 11:32:03.023430  108424 watch_cache.go:405] Replace watchCache (rev: 30563) 
I0919 11:32:03.023557  108424 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.023769  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:32:03.023774  108424 reflector.go:153] Listing and watching *core.ServiceAccount from storage/cacher.go:/serviceaccounts
I0919 11:32:03.023798  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:32:03.025167  108424 store.go:1342] Monitoring services count at <storage-prefix>//services/specs
I0919 11:32:03.025244  108424 reflector.go:153] Listing and watching *core.Service from storage/cacher.go:/services/specs
I0919 11:32:03.025223  108424 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.025435  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:32:03.025459  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:32:03.026268  108424 watch_cache.go:405] Replace watchCache (rev: 30563) 
I0919 11:32:03.028313  108424 watch_cache.go:405] Replace watchCache (rev: 30563) 
I0919 11:32:03.029083  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:32:03.029118  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:32:03.030653  108424 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.030837  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:32:03.030865  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:32:03.034675  108424 store.go:1342] Monitoring replicationcontrollers count at <storage-prefix>//controllers
I0919 11:32:03.034714  108424 rest.go:115] the default service ipfamily for this cluster is: IPv4
I0919 11:32:03.034773  108424 reflector.go:153] Listing and watching *core.ReplicationController from storage/cacher.go:/controllers
I0919 11:32:03.035262  108424 storage_factory.go:285] storing bindings in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.035545  108424 storage_factory.go:285] storing componentstatuses in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.036343  108424 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.037128  108424 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.053793  108424 watch_cache.go:405] Replace watchCache (rev: 30563) 
I0919 11:32:03.053327  108424 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.077843  108424 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.078491  108424 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.078760  108424 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.079109  108424 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.080031  108424 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.081263  108424 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.081805  108424 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.082959  108424 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.114578  108424 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.136469  108424 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.137076  108424 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.138688  108424 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.139336  108424 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.140133  108424 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.140974  108424 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.141527  108424 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.142003  108424 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.142794  108424 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.144128  108424 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.144690  108424 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.145739  108424 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.146836  108424 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.147526  108424 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.147959  108424 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.149003  108424 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.149593  108424 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.151676  108424 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.152701  108424 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.153670  108424 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.154566  108424 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.155021  108424 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.155244  108424 master.go:450] Skipping disabled API group "auditregistration.k8s.io".
I0919 11:32:03.155344  108424 master.go:461] Enabling API group "authentication.k8s.io".
I0919 11:32:03.155446  108424 master.go:461] Enabling API group "authorization.k8s.io".
I0919 11:32:03.155721  108424 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.156106  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:32:03.156218  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:32:03.158478  108424 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0919 11:32:03.158644  108424 reflector.go:153] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0919 11:32:03.160708  108424 watch_cache.go:405] Replace watchCache (rev: 30563) 
I0919 11:32:03.163446  108424 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.164123  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:32:03.164397  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:32:03.167111  108424 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0919 11:32:03.167935  108424 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.168393  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:32:03.168744  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:32:03.168929  108424 reflector.go:153] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0919 11:32:03.172046  108424 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0919 11:32:03.172077  108424 master.go:461] Enabling API group "autoscaling".
I0919 11:32:03.172283  108424 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.172597  108424 reflector.go:153] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0919 11:32:03.174340  108424 watch_cache.go:405] Replace watchCache (rev: 30563) 
I0919 11:32:03.175595  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:32:03.175627  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:32:03.176891  108424 store.go:1342] Monitoring jobs.batch count at <storage-prefix>//jobs
I0919 11:32:03.177099  108424 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.177262  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:32:03.177287  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:32:03.177395  108424 reflector.go:153] Listing and watching *batch.Job from storage/cacher.go:/jobs
I0919 11:32:03.179678  108424 watch_cache.go:405] Replace watchCache (rev: 30563) 
I0919 11:32:03.180082  108424 store.go:1342] Monitoring cronjobs.batch count at <storage-prefix>//cronjobs
I0919 11:32:03.180113  108424 master.go:461] Enabling API group "batch".
I0919 11:32:03.180311  108424 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.180540  108424 reflector.go:153] Listing and watching *batch.CronJob from storage/cacher.go:/cronjobs
I0919 11:32:03.182719  108424 watch_cache.go:405] Replace watchCache (rev: 30563) 
I0919 11:32:03.184273  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:32:03.184309  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:32:03.185505  108424 store.go:1342] Monitoring certificatesigningrequests.certificates.k8s.io count at <storage-prefix>//certificatesigningrequests
I0919 11:32:03.185545  108424 master.go:461] Enabling API group "certificates.k8s.io".
I0919 11:32:03.185783  108424 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.185977  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:32:03.186004  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:32:03.186110  108424 reflector.go:153] Listing and watching *certificates.CertificateSigningRequest from storage/cacher.go:/certificatesigningrequests
I0919 11:32:03.188814  108424 watch_cache.go:405] Replace watchCache (rev: 30563) 
I0919 11:32:03.189216  108424 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0919 11:32:03.189574  108424 reflector.go:153] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0919 11:32:03.190655  108424 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.275795  108424 watch_cache.go:405] Replace watchCache (rev: 30563) 
I0919 11:32:03.275947  108424 watch_cache.go:405] Replace watchCache (rev: 30563) 
I0919 11:32:03.276990  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:32:03.277089  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:32:03.278577  108424 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0919 11:32:03.278606  108424 master.go:461] Enabling API group "coordination.k8s.io".
I0919 11:32:03.278628  108424 master.go:450] Skipping disabled API group "discovery.k8s.io".
I0919 11:32:03.278762  108424 reflector.go:153] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0919 11:32:03.278845  108424 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.278997  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:32:03.279020  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:32:03.280420  108424 watch_cache.go:405] Replace watchCache (rev: 30568) 
I0919 11:32:03.280890  108424 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0919 11:32:03.280921  108424 master.go:461] Enabling API group "extensions".
I0919 11:32:03.281162  108424 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.281335  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:32:03.281379  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:32:03.281502  108424 reflector.go:153] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0919 11:32:03.282938  108424 store.go:1342] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I0919 11:32:03.283030  108424 watch_cache.go:405] Replace watchCache (rev: 30568) 
I0919 11:32:03.283146  108424 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.283313  108424 reflector.go:153] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I0919 11:32:03.283344  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:32:03.283389  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:32:03.285638  108424 watch_cache.go:405] Replace watchCache (rev: 30568) 
I0919 11:32:03.286467  108424 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0919 11:32:03.286692  108424 master.go:461] Enabling API group "networking.k8s.io".
I0919 11:32:03.286595  108424 reflector.go:153] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0919 11:32:03.287574  108424 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.292736  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:32:03.292789  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:32:03.294143  108424 watch_cache.go:405] Replace watchCache (rev: 30568) 
I0919 11:32:03.295965  108424 store.go:1342] Monitoring runtimeclasses.node.k8s.io count at <storage-prefix>//runtimeclasses
I0919 11:32:03.296032  108424 master.go:461] Enabling API group "node.k8s.io".
I0919 11:32:03.296076  108424 reflector.go:153] Listing and watching *node.RuntimeClass from storage/cacher.go:/runtimeclasses
I0919 11:32:03.296248  108424 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.296464  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:32:03.296616  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:32:03.297917  108424 watch_cache.go:405] Replace watchCache (rev: 30568) 
I0919 11:32:03.298042  108424 store.go:1342] Monitoring poddisruptionbudgets.policy count at <storage-prefix>//poddisruptionbudgets
I0919 11:32:03.298212  108424 reflector.go:153] Listing and watching *policy.PodDisruptionBudget from storage/cacher.go:/poddisruptionbudgets
I0919 11:32:03.298261  108424 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.298451  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:32:03.298482  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:32:03.300765  108424 watch_cache.go:405] Replace watchCache (rev: 30568) 
I0919 11:32:03.300787  108424 store.go:1342] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicy
I0919 11:32:03.300806  108424 master.go:461] Enabling API group "policy".
I0919 11:32:03.300858  108424 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.301058  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:32:03.301086  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:32:03.301099  108424 reflector.go:153] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicy
I0919 11:32:03.302802  108424 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0919 11:32:03.302905  108424 reflector.go:153] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0919 11:32:03.303287  108424 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.303529  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:32:03.303560  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:32:03.304203  108424 watch_cache.go:405] Replace watchCache (rev: 30568) 
I0919 11:32:03.304591  108424 watch_cache.go:405] Replace watchCache (rev: 30568) 
I0919 11:32:03.306854  108424 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0919 11:32:03.306909  108424 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.307091  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:32:03.307116  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:32:03.307205  108424 reflector.go:153] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0919 11:32:03.309701  108424 watch_cache.go:405] Replace watchCache (rev: 30568) 
I0919 11:32:03.310123  108424 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0919 11:32:03.310346  108424 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.310579  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:32:03.310604  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:32:03.310695  108424 reflector.go:153] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0919 11:32:03.314000  108424 watch_cache.go:405] Replace watchCache (rev: 30568) 
I0919 11:32:03.323031  108424 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0919 11:32:03.323532  108424 reflector.go:153] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0919 11:32:03.325742  108424 watch_cache.go:405] Replace watchCache (rev: 30568) 
I0919 11:32:03.326991  108424 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.327730  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:32:03.327889  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:32:03.329072  108424 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0919 11:32:03.329334  108424 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.329532  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:32:03.329559  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:32:03.329671  108424 reflector.go:153] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0919 11:32:03.331180  108424 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0919 11:32:03.331233  108424 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.331438  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:32:03.331462  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:32:03.331554  108424 reflector.go:153] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0919 11:32:03.335206  108424 watch_cache.go:405] Replace watchCache (rev: 30568) 
I0919 11:32:03.336333  108424 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0919 11:32:03.336764  108424 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.336963  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:32:03.336992  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:32:03.337106  108424 reflector.go:153] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0919 11:32:03.339524  108424 watch_cache.go:405] Replace watchCache (rev: 30568) 
I0919 11:32:03.339954  108424 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0919 11:32:03.339986  108424 master.go:461] Enabling API group "rbac.authorization.k8s.io".
I0919 11:32:03.342698  108424 reflector.go:153] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0919 11:32:03.344906  108424 watch_cache.go:405] Replace watchCache (rev: 30568) 
I0919 11:32:03.357542  108424 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.357846  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:32:03.357886  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:32:03.359314  108424 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0919 11:32:03.359603  108424 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.359928  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:32:03.359963  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:32:03.360150  108424 reflector.go:153] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0919 11:32:03.361760  108424 watch_cache.go:405] Replace watchCache (rev: 30568) 
I0919 11:32:03.363404  108424 watch_cache.go:405] Replace watchCache (rev: 30568) 
I0919 11:32:03.363427  108424 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0919 11:32:03.363452  108424 master.go:461] Enabling API group "scheduling.k8s.io".
I0919 11:32:03.363614  108424 master.go:450] Skipping disabled API group "settings.k8s.io".
I0919 11:32:03.363698  108424 reflector.go:153] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0919 11:32:03.363807  108424 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.364042  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:32:03.364067  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:32:03.365735  108424 watch_cache.go:405] Replace watchCache (rev: 30568) 
I0919 11:32:03.365744  108424 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0919 11:32:03.365932  108424 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.366193  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:32:03.366412  108424 reflector.go:153] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0919 11:32:03.367289  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:32:03.367301  108424 watch_cache.go:405] Replace watchCache (rev: 30568) 
I0919 11:32:03.368910  108424 reflector.go:153] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0919 11:32:03.369430  108424 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0919 11:32:03.369630  108424 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.370553  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:32:03.370798  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:32:03.371540  108424 watch_cache.go:405] Replace watchCache (rev: 30568) 
I0919 11:32:03.372717  108424 store.go:1342] Monitoring csinodes.storage.k8s.io count at <storage-prefix>//csinodes
I0919 11:32:03.372778  108424 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.372954  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:32:03.372988  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:32:03.373107  108424 reflector.go:153] Listing and watching *storage.CSINode from storage/cacher.go:/csinodes
I0919 11:32:03.374448  108424 store.go:1342] Monitoring csidrivers.storage.k8s.io count at <storage-prefix>//csidrivers
I0919 11:32:03.374660  108424 reflector.go:153] Listing and watching *storage.CSIDriver from storage/cacher.go:/csidrivers
I0919 11:32:03.374908  108424 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.375843  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:32:03.375995  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:32:03.375861  108424 watch_cache.go:405] Replace watchCache (rev: 30568) 
I0919 11:32:03.376935  108424 watch_cache.go:405] Replace watchCache (rev: 30568) 
I0919 11:32:03.377808  108424 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0919 11:32:03.378011  108424 reflector.go:153] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0919 11:32:03.378521  108424 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.379018  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:32:03.379347  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:32:03.378839  108424 watch_cache.go:405] Replace watchCache (rev: 30568) 
I0919 11:32:03.380868  108424 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0919 11:32:03.380896  108424 master.go:461] Enabling API group "storage.k8s.io".
I0919 11:32:03.380954  108424 reflector.go:153] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0919 11:32:03.381118  108424 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.381317  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:32:03.381338  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:32:03.382807  108424 watch_cache.go:405] Replace watchCache (rev: 30568) 
I0919 11:32:03.383081  108424 store.go:1342] Monitoring deployments.apps count at <storage-prefix>//deployments
I0919 11:32:03.383431  108424 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.383828  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:32:03.384018  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:32:03.383962  108424 reflector.go:153] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0919 11:32:03.385223  108424 watch_cache.go:405] Replace watchCache (rev: 30568) 
I0919 11:32:03.386218  108424 store.go:1342] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0919 11:32:03.386328  108424 reflector.go:153] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0919 11:32:03.387249  108424 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.387640  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:32:03.387768  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:32:03.389764  108424 watch_cache.go:405] Replace watchCache (rev: 30568) 
I0919 11:32:03.391111  108424 store.go:1342] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0919 11:32:03.391337  108424 reflector.go:153] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0919 11:32:03.391753  108424 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.391907  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:32:03.391931  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:32:03.392872  108424 watch_cache.go:405] Replace watchCache (rev: 30568) 
I0919 11:32:03.393401  108424 store.go:1342] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0919 11:32:03.393455  108424 reflector.go:153] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0919 11:32:03.393696  108424 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.393971  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:32:03.394177  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:32:03.394298  108424 watch_cache.go:405] Replace watchCache (rev: 30568) 
I0919 11:32:03.395719  108424 store.go:1342] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0919 11:32:03.395743  108424 master.go:461] Enabling API group "apps".
I0919 11:32:03.395747  108424 reflector.go:153] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0919 11:32:03.395791  108424 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.395959  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:32:03.395983  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:32:03.398180  108424 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0919 11:32:03.398227  108424 watch_cache.go:405] Replace watchCache (rev: 30568) 
I0919 11:32:03.398233  108424 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.398382  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:32:03.398404  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:32:03.398506  108424 reflector.go:153] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0919 11:32:03.400029  108424 watch_cache.go:405] Replace watchCache (rev: 30568) 
I0919 11:32:03.400294  108424 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0919 11:32:03.400393  108424 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.400587  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:32:03.400612  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:32:03.400629  108424 reflector.go:153] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0919 11:32:03.402612  108424 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0919 11:32:03.402659  108424 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.402830  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:32:03.402854  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:32:03.402942  108424 reflector.go:153] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0919 11:32:03.403738  108424 watch_cache.go:405] Replace watchCache (rev: 30568) 
I0919 11:32:03.404124  108424 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0919 11:32:03.404145  108424 master.go:461] Enabling API group "admissionregistration.k8s.io".
I0919 11:32:03.404145  108424 watch_cache.go:405] Replace watchCache (rev: 30568) 
I0919 11:32:03.404181  108424 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.404490  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:32:03.404512  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:32:03.404564  108424 reflector.go:153] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0919 11:32:03.405779  108424 store.go:1342] Monitoring events count at <storage-prefix>//events
I0919 11:32:03.405799  108424 master.go:461] Enabling API group "events.k8s.io".
I0919 11:32:03.405921  108424 watch_cache.go:405] Replace watchCache (rev: 30568) 
I0919 11:32:03.406028  108424 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.406220  108424 reflector.go:153] Listing and watching *core.Event from storage/cacher.go:/events
I0919 11:32:03.406230  108424 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.406646  108424 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.406767  108424 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.406866  108424 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.406985  108424 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.407176  108424 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.407264  108424 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.407351  108424 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.407507  108424 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.408519  108424 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.408816  108424 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.409739  108424 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.410044  108424 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.412075  108424 watch_cache.go:405] Replace watchCache (rev: 30568) 
I0919 11:32:03.415768  108424 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.416497  108424 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.417820  108424 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.418271  108424 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.419563  108424 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.419981  108424 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0919 11:32:03.420152  108424 genericapiserver.go:404] Skipping API batch/v2alpha1 because it has no resources.
I0919 11:32:03.421120  108424 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.421481  108424 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.421874  108424 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.423766  108424 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.426410  108424 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.429467  108424 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.430576  108424 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.433026  108424 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.437427  108424 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.438857  108424 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.439834  108424 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0919 11:32:03.440908  108424 genericapiserver.go:404] Skipping API node.k8s.io/v1alpha1 because it has no resources.
I0919 11:32:03.443756  108424 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.444925  108424 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.447124  108424 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.449804  108424 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.452865  108424 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.454190  108424 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.457345  108424 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.459300  108424 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.460972  108424 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.463728  108424 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.465122  108424 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0919 11:32:03.465420  108424 genericapiserver.go:404] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
I0919 11:32:03.466356  108424 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.467199  108424 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0919 11:32:03.467387  108424 genericapiserver.go:404] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
I0919 11:32:03.469299  108424 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.470324  108424 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.471011  108424 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.471901  108424 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.472808  108424 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.474276  108424 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.475640  108424 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0919 11:32:03.476180  108424 genericapiserver.go:404] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
I0919 11:32:03.478523  108424 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.481496  108424 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.482072  108424 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.483524  108424 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.483982  108424 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.484579  108424 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.485452  108424 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.485937  108424 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.486510  108424 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.487696  108424 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.488068  108424 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.488480  108424 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0919 11:32:03.488741  108424 genericapiserver.go:404] Skipping API apps/v1beta2 because it has no resources.
W0919 11:32:03.488828  108424 genericapiserver.go:404] Skipping API apps/v1beta1 because it has no resources.
I0919 11:32:03.489863  108424 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.490743  108424 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.491762  108424 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.492859  108424 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.494204  108424 storage_factory.go:285] storing events.events.k8s.io in events.k8s.io/v1beta1, reading as events.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"34cf0a26-f8a1-4c9a-be41-ab39c3bb05be", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:32:03.501652  108424 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.788109ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36674]
I0919 11:32:03.502443  108424 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 11:32:03.502473  108424 healthz.go:177] healthz check poststarthook/bootstrap-controller failed: not finished
I0919 11:32:03.502484  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:32:03.502494  108424 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 11:32:03.502502  108424 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 11:32:03.502509  108424 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[-]poststarthook/bootstrap-controller failed: reason withheld
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 11:32:03.502539  108424 httplog.go:90] GET /healthz: (224.04µs) 0 [Go-http-client/1.1 127.0.0.1:36674]
I0919 11:32:03.504819  108424 httplog.go:90] GET /api/v1/services: (1.160989ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
I0919 11:32:03.509259  108424 httplog.go:90] GET /api/v1/services: (1.375914ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
I0919 11:32:03.519547  108424 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 11:32:03.519591  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:32:03.519604  108424 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 11:32:03.519615  108424 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 11:32:03.519623  108424 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 11:32:03.519686  108424 httplog.go:90] GET /healthz: (743.397µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36674]
I0919 11:32:03.520841  108424 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.899033ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
I0919 11:32:03.522731  108424 httplog.go:90] GET /api/v1/services: (1.483324ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36674]
I0919 11:32:03.523593  108424 httplog.go:90] POST /api/v1/namespaces: (2.326528ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36678]
I0919 11:32:03.524769  108424 httplog.go:90] GET /api/v1/services: (1.899677ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
I0919 11:32:03.526914  108424 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.610694ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36678]
I0919 11:32:03.529126  108424 httplog.go:90] POST /api/v1/namespaces: (1.766001ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
I0919 11:32:03.530915  108424 httplog.go:90] GET /api/v1/namespaces/kube-node-lease: (1.240273ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
I0919 11:32:03.533348  108424 httplog.go:90] POST /api/v1/namespaces: (1.92108ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
I0919 11:32:03.604276  108424 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 11:32:03.604317  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:32:03.604329  108424 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 11:32:03.604338  108424 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 11:32:03.604376  108424 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 11:32:03.604416  108424 httplog.go:90] GET /healthz: (345.002µs) 0 [Go-http-client/1.1 127.0.0.1:36676]
I0919 11:32:03.621753  108424 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 11:32:03.621791  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:32:03.621803  108424 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 11:32:03.621813  108424 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 11:32:03.621821  108424 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 11:32:03.621851  108424 httplog.go:90] GET /healthz: (275.306µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
I0919 11:32:03.704239  108424 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 11:32:03.704286  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:32:03.704298  108424 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 11:32:03.704308  108424 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 11:32:03.704316  108424 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 11:32:03.704348  108424 httplog.go:90] GET /healthz: (265.679µs) 0 [Go-http-client/1.1 127.0.0.1:36676]
I0919 11:32:03.722050  108424 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 11:32:03.722094  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:32:03.722108  108424 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 11:32:03.722118  108424 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 11:32:03.722127  108424 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 11:32:03.722165  108424 httplog.go:90] GET /healthz: (288.71µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
I0919 11:32:03.804327  108424 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 11:32:03.804384  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:32:03.804403  108424 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 11:32:03.804413  108424 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 11:32:03.804422  108424 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 11:32:03.804458  108424 httplog.go:90] GET /healthz: (302.737µs) 0 [Go-http-client/1.1 127.0.0.1:36676]
I0919 11:32:03.821655  108424 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 11:32:03.821693  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:32:03.821708  108424 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 11:32:03.821718  108424 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 11:32:03.821726  108424 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 11:32:03.821759  108424 httplog.go:90] GET /healthz: (278.374µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
I0919 11:32:03.905308  108424 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 11:32:03.905343  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:32:03.905356  108424 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 11:32:03.905381  108424 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 11:32:03.905390  108424 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 11:32:03.905421  108424 httplog.go:90] GET /healthz: (272.752µs) 0 [Go-http-client/1.1 127.0.0.1:36676]
I0919 11:32:03.923913  108424 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 11:32:03.923959  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:32:03.923973  108424 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 11:32:03.923982  108424 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 11:32:03.923990  108424 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 11:32:03.924022  108424 httplog.go:90] GET /healthz: (282.409µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
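The block of healthz entries above shows the test client polling GET /healthz roughly every 80–100ms and getting a failure until the etcd client connects and each poststarthook reports `ok`. A minimal sketch of that readiness-polling pattern (the helper `wait_until_healthy` is illustrative, not Kubernetes code; only the endpoint name comes from the log):

```python
import time
import urllib.request
import urllib.error

def wait_until_healthy(url, timeout_s=30.0, interval_s=0.1):
    """Poll `url` until it returns HTTP 200, or raise after `timeout_s`.

    Mirrors the loop in the log: failed probes (connection errors or the
    0-status responses above) are ignored and retried after a short sleep.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(url) as resp:
                if resp.status == 200:
                    return True
        except urllib.error.URLError:
            pass  # server not ready yet; try again after the interval
        time.sleep(interval_s)
    raise TimeoutError(f"{url} never became healthy")
```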
I0919 11:32:03.974130  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:32:03.974231  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:32:04.007950  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:32:04.007984  108424 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 11:32:04.007998  108424 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 11:32:04.008007  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 11:32:04.008055  108424 httplog.go:90] GET /healthz: (2.875636ms) 0 [Go-http-client/1.1 127.0.0.1:36676]
I0919 11:32:04.023815  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:32:04.023850  108424 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 11:32:04.023860  108424 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 11:32:04.023868  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 11:32:04.023914  108424 httplog.go:90] GET /healthz: (1.69054ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
E0919 11:32:04.068909  108424 event_broadcaster.go:244] Unable to write event: 'Post http://127.0.0.1:35423/apis/events.k8s.io/v1beta1/namespaces/permit-plugin98d730eb-10d8-4279-8fc9-576571991a2a/events: dial tcp 127.0.0.1:35423: connect: connection refused' (may retry after sleeping)
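The event broadcaster error above ("may retry after sleeping") reflects a retry-with-backoff on connection refused. A hedged sketch of that pattern under assumed parameters (attempt count and delays are invented for illustration; the real broadcaster's policy may differ):

```python
import time

def retry_post(fn, attempts=5, base_delay=0.05):
    """Call fn(); on a refused connection, sleep and retry with doubling delay.

    Gives up and re-raises after `attempts` failures, like a client that
    eventually drops the event rather than blocking forever.
    """
    delay = base_delay
    for i in range(attempts):
        try:
            return fn()
        except ConnectionError:
            if i == attempts - 1:
                raise
            time.sleep(delay)
            delay *= 2  # exponential backoff between retries
```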
I0919 11:32:04.105531  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:32:04.105571  108424 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 11:32:04.105582  108424 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 11:32:04.105590  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 11:32:04.105633  108424 httplog.go:90] GET /healthz: (1.54368ms) 0 [Go-http-client/1.1 127.0.0.1:36676]
I0919 11:32:04.123384  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:32:04.123425  108424 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 11:32:04.123437  108424 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 11:32:04.123448  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 11:32:04.123512  108424 httplog.go:90] GET /healthz: (1.973994ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
I0919 11:32:04.205260  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:32:04.205295  108424 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 11:32:04.205306  108424 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 11:32:04.205315  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 11:32:04.205406  108424 httplog.go:90] GET /healthz: (1.272171ms) 0 [Go-http-client/1.1 127.0.0.1:36676]
I0919 11:32:04.222753  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:32:04.222787  108424 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 11:32:04.222798  108424 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 11:32:04.222806  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 11:32:04.222854  108424 httplog.go:90] GET /healthz: (1.353802ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
I0919 11:32:04.305012  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:32:04.305042  108424 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 11:32:04.305052  108424 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 11:32:04.305060  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 11:32:04.305107  108424 httplog.go:90] GET /healthz: (1.089773ms) 0 [Go-http-client/1.1 127.0.0.1:36676]
I0919 11:32:04.323209  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:32:04.323240  108424 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 11:32:04.323251  108424 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 11:32:04.323259  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 11:32:04.323304  108424 httplog.go:90] GET /healthz: (1.898911ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
I0919 11:32:04.406006  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:32:04.406042  108424 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 11:32:04.406054  108424 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 11:32:04.406064  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 11:32:04.406109  108424 httplog.go:90] GET /healthz: (1.948612ms) 0 [Go-http-client/1.1 127.0.0.1:36676]
I0919 11:32:04.422414  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:32:04.422442  108424 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 11:32:04.422451  108424 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 11:32:04.422459  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 11:32:04.422503  108424 httplog.go:90] GET /healthz: (1.109132ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
I0919 11:32:04.501609  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.078462ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36720]
I0919 11:32:04.501831  108424 httplog.go:90] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-node-critical: (2.079828ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
I0919 11:32:04.503404  108424 httplog.go:90] GET /api/v1/namespaces/kube-system: (3.75671ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36674]
I0919 11:32:04.505228  108424 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: (1.441213ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36674]
I0919 11:32:04.505506  108424 httplog.go:90] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (2.982532ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
I0919 11:32:04.506047  108424 storage_scheduling.go:139] created PriorityClass system-node-critical with value 2000001000
I0919 11:32:04.506800  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (4.462291ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36720]
I0919 11:32:04.507637  108424 httplog.go:90] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-cluster-critical: (1.430167ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36674]
I0919 11:32:04.507990  108424 httplog.go:90] POST /api/v1/namespaces/kube-system/configmaps: (1.716866ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
I0919 11:32:04.509116  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:32:04.509150  108424 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 11:32:04.509161  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:32:04.509183  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (2.102746ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36720]
I0919 11:32:04.509226  108424 httplog.go:90] GET /healthz: (4.696071ms) 0 [Go-http-client/1.1 127.0.0.1:36722]
I0919 11:32:04.509738  108424 httplog.go:90] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (1.73066ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36674]
I0919 11:32:04.509921  108424 storage_scheduling.go:139] created PriorityClass system-cluster-critical with value 2000000000
I0919 11:32:04.509934  108424 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
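The priority-class bootstrap above follows a GET-then-POST pattern: each GET returns 404, then a POST returns 201, and the hook logs "created ... or already exist". A minimal get-or-create sketch of that idempotent bootstrap, using a plain dict as a stand-in for the API storage (the helper and its signature are hypothetical):

```python
def ensure_exists(name, store, build):
    """Return (object, created) for `name`, creating it only if missing.

    The lookup corresponds to the GET (404 when absent) and the insert to
    the POST (201) seen in the httplog lines; re-running it is a no-op.
    """
    if name in store:          # GET ... 200: already bootstrapped
        return store[name], False
    obj = build()              # GET returned 404, so POST ... 201
    store[name] = obj
    return obj, True
```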
I0919 11:32:04.511027  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (1.479412ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36720]
I0919 11:32:04.512501  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (1.141135ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36722]
I0919 11:32:04.513728  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (894.189µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36722]
I0919 11:32:04.515962  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (1.92032ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36722]
I0919 11:32:04.517201  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (731.024µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36722]
I0919 11:32:04.518227  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (717.904µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36722]
I0919 11:32:04.520654  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-admin: (2.107016ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36722]
I0919 11:32:04.522790  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:32:04.522814  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:32:04.522845  108424 httplog.go:90] GET /healthz: (1.050173ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
I0919 11:32:04.523270  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.208275ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36722]
I0919 11:32:04.523910  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/cluster-admin
I0919 11:32:04.525137  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:discovery: (826.731µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36722]
I0919 11:32:04.527579  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.914338ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36722]
I0919 11:32:04.527990  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:discovery
I0919 11:32:04.536730  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:basic-user: (8.387407ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36722]
I0919 11:32:04.543999  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (6.382344ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36722]
I0919 11:32:04.544886  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:basic-user
I0919 11:32:04.546246  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:public-info-viewer: (1.15955ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36722]
I0919 11:32:04.549318  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.518277ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36722]
I0919 11:32:04.549871  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:public-info-viewer
I0919 11:32:04.551222  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (1.055654ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36722]
I0919 11:32:04.554726  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.090649ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36722]
I0919 11:32:04.555100  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/admin
I0919 11:32:04.556498  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (1.20342ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36722]
I0919 11:32:04.558705  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.762543ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36722]
I0919 11:32:04.558993  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/edit
I0919 11:32:04.560856  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (1.527407ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36722]
I0919 11:32:04.563326  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.966226ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36722]
I0919 11:32:04.563675  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/view
I0919 11:32:04.564826  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (968.008µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36722]
I0919 11:32:04.567143  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.784856ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36722]
I0919 11:32:04.567539  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin
I0919 11:32:04.568712  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (866.692µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36722]
I0919 11:32:04.574224  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.628727ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36722]
I0919 11:32:04.574485  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit
I0919 11:32:04.576133  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (1.464337ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36722]
I0919 11:32:04.578500  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.842617ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36722]
I0919 11:32:04.578945  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view
I0919 11:32:04.580350  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:heapster: (890.711µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36722]
I0919 11:32:04.582912  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.692198ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36722]
I0919 11:32:04.583485  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:heapster
I0919 11:32:04.584592  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node: (883.659µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36722]
I0919 11:32:04.587004  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.841038ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36722]
I0919 11:32:04.587279  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node
I0919 11:32:04.588851  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-problem-detector: (1.382909ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36722]
I0919 11:32:04.590951  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.727002ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36722]
I0919 11:32:04.593540  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector
I0919 11:32:04.594583  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kubelet-api-admin: (830.603µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36722]
I0919 11:32:04.597106  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.183191ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36722]
I0919 11:32:04.597351  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kubelet-api-admin
I0919 11:32:04.598514  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-bootstrapper: (940.26µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36722]
I0919 11:32:04.600750  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.734921ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36722]
I0919 11:32:04.600941  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-bootstrapper
I0919 11:32:04.602040  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:auth-delegator: (917.888µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36722]
I0919 11:32:04.603683  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.231214ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36722]
I0919 11:32:04.603834  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:auth-delegator
I0919 11:32:04.604993  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:32:04.605016  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:32:04.605047  108424 httplog.go:90] GET /healthz: (1.20878ms) 0 [Go-http-client/1.1 127.0.0.1:36722]
I0919 11:32:04.605240  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-aggregator: (1.253367ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
I0919 11:32:04.607861  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.84118ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
I0919 11:32:04.608039  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator
I0919 11:32:04.611650  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-controller-manager: (3.45579ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
I0919 11:32:04.614309  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.066001ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
I0919 11:32:04.614785  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager
I0919 11:32:04.616899  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-dns: (1.946754ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
I0919 11:32:04.619241  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.953541ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
I0919 11:32:04.619461  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-dns
I0919 11:32:04.623509  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:persistent-volume-provisioner: (3.893906ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
I0919 11:32:04.625494  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:32:04.625521  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:32:04.625557  108424 httplog.go:90] GET /healthz: (2.680629ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36722]
I0919 11:32:04.625892  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.960131ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
I0919 11:32:04.626046  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner
I0919 11:32:04.636931  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-attacher: (10.646922ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
I0919 11:32:04.640261  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.440372ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
I0919 11:32:04.640699  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:csi-external-attacher
I0919 11:32:04.642184  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:nodeclient: (1.138842ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
I0919 11:32:04.644511  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.717348ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
I0919 11:32:04.644681  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:nodeclient
I0919 11:32:04.645892  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient: (939.623µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
I0919 11:32:04.649258  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.90417ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
I0919 11:32:04.649478  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
I0919 11:32:04.650575  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:volume-scheduler: (954.884µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
I0919 11:32:04.657102  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.018763ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
I0919 11:32:04.657340  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:volume-scheduler
I0919 11:32:04.659118  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-proxier: (1.133125ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
I0919 11:32:04.662166  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.544818ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
I0919 11:32:04.662607  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-proxier
I0919 11:32:04.664233  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-scheduler: (1.121756ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
I0919 11:32:04.666689  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.957309ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
I0919 11:32:04.666946  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler
I0919 11:32:04.668724  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-provisioner: (1.52726ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
I0919 11:32:04.670836  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.680763ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
I0919 11:32:04.671027  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:csi-external-provisioner
I0919 11:32:04.672591  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:attachdetach-controller: (1.372084ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
I0919 11:32:04.675189  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.256297ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
I0919 11:32:04.676409  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0919 11:32:04.677712  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:clusterrole-aggregation-controller: (972.264µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
I0919 11:32:04.679972  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.866061ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
I0919 11:32:04.680188  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0919 11:32:04.681297  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:cronjob-controller: (884.141µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
I0919 11:32:04.683692  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.025059ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
I0919 11:32:04.683991  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0919 11:32:04.685624  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:daemon-set-controller: (1.382454ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
I0919 11:32:04.689490  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.435294ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
I0919 11:32:04.689720  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0919 11:32:04.691101  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:deployment-controller: (1.225804ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
I0919 11:32:04.693818  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.744258ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
I0919 11:32:04.694078  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller
I0919 11:32:04.695061  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:disruption-controller: (822.756µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
I0919 11:32:04.696972  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.415877ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
I0919 11:32:04.697165  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller
I0919 11:32:04.698255  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:endpoint-controller: (923.357µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
I0919 11:32:04.700178  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.59424ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
I0919 11:32:04.700464  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0919 11:32:04.702175  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:expand-controller: (1.539013ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
I0919 11:32:04.705276  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:32:04.705298  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:32:04.705329  108424 httplog.go:90] GET /healthz: (928.534µs) 0 [Go-http-client/1.1 127.0.0.1:36722]
I0919 11:32:04.706416  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.871593ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
I0919 11:32:04.706818  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:expand-controller
I0919 11:32:04.708034  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:generic-garbage-collector: (1.05235ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
I0919 11:32:04.710648  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.152308ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
I0919 11:32:04.711036  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0919 11:32:04.714379  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:horizontal-pod-autoscaler: (1.115415ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
I0919 11:32:04.716582  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.792919ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
I0919 11:32:04.716811  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0919 11:32:04.717946  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:job-controller: (913.927µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
I0919 11:32:04.719908  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.574985ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
I0919 11:32:04.720203  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller
I0919 11:32:04.721185  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:namespace-controller: (828.415µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
I0919 11:32:04.723243  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:32:04.723267  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:32:04.723308  108424 httplog.go:90] GET /healthz: (1.223101ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36722]
I0919 11:32:04.723532  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.992876ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
I0919 11:32:04.723755  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:namespace-controller
I0919 11:32:04.724853  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:node-controller: (891.359µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
I0919 11:32:04.726846  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.611952ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
I0919 11:32:04.727048  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:node-controller
I0919 11:32:04.728002  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:persistent-volume-binder: (811.648µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
I0919 11:32:04.729977  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.525912ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
I0919 11:32:04.733766  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0919 11:32:04.736326  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pod-garbage-collector: (2.240411ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
I0919 11:32:04.738279  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.543275ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
I0919 11:32:04.738609  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0919 11:32:04.739563  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replicaset-controller: (759.472µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
I0919 11:32:04.741629  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.54522ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
I0919 11:32:04.741832  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0919 11:32:04.742908  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replication-controller: (883.537µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
I0919 11:32:04.745225  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.925585ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
I0919 11:32:04.745486  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller
I0919 11:32:04.746694  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:resourcequota-controller: (943.997µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
I0919 11:32:04.750060  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.066691ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
I0919 11:32:04.750228  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0919 11:32:04.751347  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:route-controller: (995.565µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
I0919 11:32:04.754079  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.400982ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
I0919 11:32:04.754249  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:route-controller
I0919 11:32:04.755204  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-account-controller: (800.987µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
I0919 11:32:04.756845  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.325675ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
I0919 11:32:04.757168  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-account-controller
I0919 11:32:04.758246  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-controller: (782.923µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
I0919 11:32:04.760170  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.520876ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
I0919 11:32:04.760712  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-controller
I0919 11:32:04.762102  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:statefulset-controller: (962.993µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
I0919 11:32:04.768697  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.924098ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
I0919 11:32:04.768947  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0919 11:32:04.770329  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:ttl-controller: (909.952µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
I0919 11:32:04.772794  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.562447ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
I0919 11:32:04.773016  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-controller
I0919 11:32:04.774349  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:certificate-controller: (1.176577ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
I0919 11:32:04.777197  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.445765ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
I0919 11:32:04.777447  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller
I0919 11:32:04.779170  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pvc-protection-controller: (1.448138ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
I0919 11:32:04.781171  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.527402ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
I0919 11:32:04.781473  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0919 11:32:04.782433  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pv-protection-controller: (739.333µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
I0919 11:32:04.802511  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.568929ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
I0919 11:32:04.802768  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0919 11:32:04.805012  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:32:04.805039  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:32:04.805074  108424 httplog.go:90] GET /healthz: (1.099701ms) 0 [Go-http-client/1.1 127.0.0.1:36676]
I0919 11:32:04.821092  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin: (1.224842ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
I0919 11:32:04.822734  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:32:04.822763  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:32:04.822802  108424 httplog.go:90] GET /healthz: (1.275414ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36722]
I0919 11:32:04.842673  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.627833ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36722]
I0919 11:32:04.842956  108424 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin
I0919 11:32:04.861543  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (1.615861ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36722]
I0919 11:32:04.886386  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.812444ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36722]
I0919 11:32:04.886701  108424 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery
I0919 11:32:04.902399  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:basic-user: (2.274325ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36722]
I0919 11:32:04.906801  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:32:04.906846  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:32:04.906892  108424 httplog.go:90] GET /healthz: (2.474126ms) 0 [Go-http-client/1.1 127.0.0.1:36722]
I0919 11:32:04.922284  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:32:04.922327  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:32:04.922382  108424 httplog.go:90] GET /healthz: (1.034804ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
I0919 11:32:04.922442  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.527267ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36722]
I0919 11:32:04.922665  108424 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user
I0919 11:32:04.941388  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:public-info-viewer: (1.486995ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36722]
I0919 11:32:04.962973  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.749586ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36722]
I0919 11:32:04.963930  108424 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:public-info-viewer
I0919 11:32:04.981345  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node-proxier: (1.442318ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36722]
I0919 11:32:05.002295  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.32529ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36722]
I0919 11:32:05.002791  108424 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier
I0919 11:32:05.005037  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:32:05.005063  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:32:05.005098  108424 httplog.go:90] GET /healthz: (1.113934ms) 0 [Go-http-client/1.1 127.0.0.1:36722]
I0919 11:32:05.021298  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-controller-manager: (1.381391ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36722]
I0919 11:32:05.022214  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:32:05.022241  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:32:05.022276  108424 httplog.go:90] GET /healthz: (819.59µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
I0919 11:32:05.042390  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.523194ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
I0919 11:32:05.042669  108424 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager
I0919 11:32:05.061140  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-dns: (1.298319ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
I0919 11:32:05.086055  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (5.566612ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
I0919 11:32:05.086393  108424 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns
I0919 11:32:05.101305  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-scheduler: (1.472085ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
I0919 11:32:05.104975  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:32:05.105016  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:32:05.105055  108424 httplog.go:90] GET /healthz: (1.08835ms) 0 [Go-http-client/1.1 127.0.0.1:36676]
I0919 11:32:05.122326  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:32:05.122378  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:32:05.123386  108424 httplog.go:90] GET /healthz: (2.045457ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36722]
I0919 11:32:05.123256  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.349708ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
I0919 11:32:05.123710  108424 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler
I0919 11:32:05.141514  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:volume-scheduler: (1.670594ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36722]
I0919 11:32:05.162561  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.630092ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36722]
I0919 11:32:05.162815  108424 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:volume-scheduler
I0919 11:32:05.181426  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node: (1.550549ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36722]
I0919 11:32:05.202470  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.564991ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36722]
I0919 11:32:05.202740  108424 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node
I0919 11:32:05.204963  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:32:05.204987  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:32:05.205021  108424 httplog.go:90] GET /healthz: (1.042826ms) 0 [Go-http-client/1.1 127.0.0.1:36722]
I0919 11:32:05.223078  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:attachdetach-controller: (2.065062ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36722]
I0919 11:32:05.223239  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:32:05.223259  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:32:05.223286  108424 httplog.go:90] GET /healthz: (1.626585ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
I0919 11:32:05.251398  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (9.34639ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
I0919 11:32:05.251694  108424 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0919 11:32:05.264427  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:clusterrole-aggregation-controller: (1.872161ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
I0919 11:32:05.282162  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.312898ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
I0919 11:32:05.282447  108424 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0919 11:32:05.301390  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:cronjob-controller: (1.538616ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
I0919 11:32:05.305020  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:32:05.305049  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:32:05.305089  108424 httplog.go:90] GET /healthz: (1.086839ms) 0 [Go-http-client/1.1 127.0.0.1:36676]
I0919 11:32:05.322340  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.403128ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
I0919 11:32:05.322617  108424 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0919 11:32:05.323304  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:32:05.323329  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:32:05.323628  108424 httplog.go:90] GET /healthz: (1.05992ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36722]
I0919 11:32:05.341337  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:daemon-set-controller: (1.392039ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36722]
I0919 11:32:05.362475  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.5647ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36722]
I0919 11:32:05.362974  108424 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0919 11:32:05.381813  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:deployment-controller: (1.813658ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36722]
I0919 11:32:05.402473  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.552755ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36722]
I0919 11:32:05.402743  108424 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I0919 11:32:05.405047  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:32:05.405077  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:32:05.405116  108424 httplog.go:90] GET /healthz: (1.141863ms) 0 [Go-http-client/1.1 127.0.0.1:36722]
I0919 11:32:05.421302  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:disruption-controller: (1.437834ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36722]
I0919 11:32:05.422263  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:32:05.422293  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:32:05.422328  108424 httplog.go:90] GET /healthz: (1.012647ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
I0919 11:32:05.442592  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.723501ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
I0919 11:32:05.443123  108424 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I0919 11:32:05.461860  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:endpoint-controller: (1.920448ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
I0919 11:32:05.482573  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.649591ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
I0919 11:32:05.482962  108424 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0919 11:32:05.501489  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:expand-controller: (1.612137ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
I0919 11:32:05.504956  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:32:05.504983  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:32:05.505035  108424 httplog.go:90] GET /healthz: (1.091348ms) 0 [Go-http-client/1.1 127.0.0.1:36676]
I0919 11:32:05.522808  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.950046ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
I0919 11:32:05.523439  108424 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
I0919 11:32:05.528107  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:32:05.528137  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:32:05.528184  108424 httplog.go:90] GET /healthz: (5.22831ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36722]
I0919 11:32:05.541281  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:generic-garbage-collector: (1.385414ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36722]
I0919 11:32:05.561975  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.121715ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36722]
I0919 11:32:05.562483  108424 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0919 11:32:05.581319  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:horizontal-pod-autoscaler: (1.450692ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36722]
I0919 11:32:05.602178  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.271452ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36722]
I0919 11:32:05.602460  108424 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0919 11:32:05.605064  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:32:05.605100  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:32:05.605136  108424 httplog.go:90] GET /healthz: (1.168996ms) 0 [Go-http-client/1.1 127.0.0.1:36722]
I0919 11:32:05.621730  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:job-controller: (1.830389ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36722]
I0919 11:32:05.622815  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:32:05.622840  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:32:05.622869  108424 httplog.go:90] GET /healthz: (886.817µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
I0919 11:32:05.642155  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.296346ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
I0919 11:32:05.642704  108424 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I0919 11:32:05.661260  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:namespace-controller: (1.358931ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
I0919 11:32:05.682290  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.403641ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
I0919 11:32:05.682836  108424 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I0919 11:32:05.700987  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:node-controller: (1.151425ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
I0919 11:32:05.705257  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:32:05.705287  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:32:05.705335  108424 httplog.go:90] GET /healthz: (1.363405ms) 0 [Go-http-client/1.1 127.0.0.1:36676]
I0919 11:32:05.722195  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:32:05.722227  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:32:05.722262  108424 httplog.go:90] GET /healthz: (955.37µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36722]
I0919 11:32:05.722448  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.584354ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
I0919 11:32:05.722696  108424 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I0919 11:32:05.741027  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:persistent-volume-binder: (1.180955ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36722]
I0919 11:32:05.768434  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.52325ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36722]
I0919 11:32:05.768692  108424 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0919 11:32:05.781551  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pod-garbage-collector: (1.646661ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36722]
I0919 11:32:05.802582  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.693591ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36722]
I0919 11:32:05.802822  108424 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0919 11:32:05.805029  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:32:05.805056  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:32:05.805709  108424 httplog.go:90] GET /healthz: (1.708511ms) 0 [Go-http-client/1.1 127.0.0.1:36722]
I0919 11:32:05.821305  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replicaset-controller: (1.415184ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36722]
I0919 11:32:05.823020  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:32:05.823044  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:32:05.823076  108424 httplog.go:90] GET /healthz: (1.276206ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36722]
I0919 11:32:05.842481  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.562803ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36722]
I0919 11:32:05.842865  108424 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0919 11:32:05.861693  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replication-controller: (1.667854ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36722]
I0919 11:32:05.882771  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.837797ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36722]
I0919 11:32:05.883418  108424 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I0919 11:32:05.901756  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:resourcequota-controller: (1.888726ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36722]
I0919 11:32:05.906271  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:32:05.906302  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:32:05.906338  108424 httplog.go:90] GET /healthz: (1.442171ms) 0 [Go-http-client/1.1 127.0.0.1:36722]
I0919 11:32:05.923830  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:32:05.923865  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:32:05.923908  108424 httplog.go:90] GET /healthz: (1.817181ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
I0919 11:32:05.924319  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (4.421889ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36722]
I0919 11:32:05.925025  108424 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0919 11:32:05.941257  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:route-controller: (1.376668ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36722]
I0919 11:32:05.962772  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.836124ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36722]
I0919 11:32:05.963049  108424 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I0919 11:32:05.982957  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-account-controller: (1.813635ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36722]
I0919 11:32:06.002346  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.469808ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36722]
I0919 11:32:06.002772  108424 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I0919 11:32:06.010860  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:32:06.010893  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:32:06.010940  108424 httplog.go:90] GET /healthz: (6.996528ms) 0 [Go-http-client/1.1 127.0.0.1:36722]
I0919 11:32:06.021267  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-controller: (1.383154ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36722]
I0919 11:32:06.023046  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:32:06.023074  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:32:06.023118  108424 httplog.go:90] GET /healthz: (1.338598ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36722]
I0919 11:32:06.042464  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.589136ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36722]
I0919 11:32:06.042753  108424 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I0919 11:32:06.061544  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:statefulset-controller: (1.621127ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36722]
I0919 11:32:06.082565  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.634364ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36722]
I0919 11:32:06.082821  108424 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0919 11:32:06.101327  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:ttl-controller: (1.473745ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36722]
I0919 11:32:06.105610  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:32:06.105640  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:32:06.105678  108424 httplog.go:90] GET /healthz: (1.555694ms) 0 [Go-http-client/1.1 127.0.0.1:36722]
I0919 11:32:06.122299  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.459565ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36722]
I0919 11:32:06.122578  108424 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I0919 11:32:06.122693  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:32:06.122709  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:32:06.122736  108424 httplog.go:90] GET /healthz: (1.343129ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
I0919 11:32:06.141774  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:certificate-controller: (1.722321ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
I0919 11:32:06.162799  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.877034ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
I0919 11:32:06.163113  108424 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I0919 11:32:06.181483  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pvc-protection-controller: (1.617692ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
I0919 11:32:06.202667  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.814594ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
I0919 11:32:06.202947  108424 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0919 11:32:06.209230  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:32:06.209261  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:32:06.209299  108424 httplog.go:90] GET /healthz: (1.249305ms) 0 [Go-http-client/1.1 127.0.0.1:36676]
I0919 11:32:06.221212  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pv-protection-controller: (1.379594ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
I0919 11:32:06.223417  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:32:06.223445  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:32:06.223480  108424 httplog.go:90] GET /healthz: (1.720165ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
I0919 11:32:06.242190  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.328776ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
I0919 11:32:06.242621  108424 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0919 11:32:06.261548  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/extension-apiserver-authentication-reader: (1.636071ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
I0919 11:32:06.264239  108424 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.238684ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
I0919 11:32:06.282903  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (3.039669ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
I0919 11:32:06.283437  108424 storage_rbac.go:278] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
I0919 11:32:06.301170  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:bootstrap-signer: (1.220065ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
I0919 11:32:06.303320  108424 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.442907ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
I0919 11:32:06.304893  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:32:06.304922  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:32:06.304969  108424 httplog.go:90] GET /healthz: (852.098µs) 0 [Go-http-client/1.1 127.0.0.1:36676]
I0919 11:32:06.322342  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.404712ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
I0919 11:32:06.322639  108424 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0919 11:32:06.322773  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:32:06.322807  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:32:06.322841  108424 httplog.go:90] GET /healthz: (1.437714ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36722]
I0919 11:32:06.345660  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:cloud-provider: (1.780982ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36722]
I0919 11:32:06.348542  108424 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.492645ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36722]
I0919 11:32:06.361978  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.109509ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36722]
I0919 11:32:06.362634  108424 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0919 11:32:06.382428  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:token-cleaner: (1.669353ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36722]
I0919 11:32:06.385036  108424 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.146069ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36722]
I0919 11:32:06.402379  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.489879ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36722]
I0919 11:32:06.402660  108424 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0919 11:32:06.405828  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:32:06.405860  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:32:06.405900  108424 httplog.go:90] GET /healthz: (1.837616ms) 0 [Go-http-client/1.1 127.0.0.1:36722]
I0919 11:32:06.421224  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-controller-manager: (1.344865ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36722]
I0919 11:32:06.423611  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:32:06.423641  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:32:06.423679  108424 httplog.go:90] GET /healthz: (2.397445ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
I0919 11:32:06.423833  108424 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.185015ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36722]
I0919 11:32:06.442413  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.518541ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36722]
I0919 11:32:06.442666  108424 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0919 11:32:06.462639  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-scheduler: (1.654471ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36722]
I0919 11:32:06.464710  108424 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.469374ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36722]
I0919 11:32:06.483547  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (3.009813ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36722]
I0919 11:32:06.483786  108424 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0919 11:32:06.501058  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles/system:controller:bootstrap-signer: (1.202429ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36722]
I0919 11:32:06.503025  108424 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.516197ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36722]
I0919 11:32:06.504698  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:32:06.504721  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:32:06.504751  108424 httplog.go:90] GET /healthz: (845.711µs) 0 [Go-http-client/1.1 127.0.0.1:36722]
I0919 11:32:06.524539  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles: (4.665538ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36722]
I0919 11:32:06.524701  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:32:06.524719  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:32:06.524750  108424 httplog.go:90] GET /healthz: (3.001685ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
I0919 11:32:06.525104  108424 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0919 11:32:06.544110  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::extension-apiserver-authentication-reader: (1.888815ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36722]
I0919 11:32:06.547289  108424 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.10168ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36722]
I0919 11:32:06.562384  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.38034ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36722]
I0919 11:32:06.562678  108424 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::extension-apiserver-authentication-reader in kube-system
I0919 11:32:06.581220  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-controller-manager: (1.345213ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36722]
I0919 11:32:06.582951  108424 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.2517ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36722]
I0919 11:32:06.605976  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (6.189913ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36722]
I0919 11:32:06.606247  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:32:06.606269  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:32:06.606302  108424 httplog.go:90] GET /healthz: (2.188977ms) 0 [Go-http-client/1.1 127.0.0.1:36676]
I0919 11:32:06.610745  108424 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0919 11:32:06.621420  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-scheduler: (1.574537ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36722]
I0919 11:32:06.623439  108424 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.508633ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36722]
I0919 11:32:06.623621  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:32:06.623639  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:32:06.623668  108424 httplog.go:90] GET /healthz: (2.392054ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
I0919 11:32:06.648074  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (7.837421ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36722]
I0919 11:32:06.648650  108424 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0919 11:32:06.664350  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:bootstrap-signer: (4.482361ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36722]
I0919 11:32:06.672685  108424 httplog.go:90] GET /api/v1/namespaces/kube-system: (7.833744ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36722]
I0919 11:32:06.684981  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (5.067352ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36722]
I0919 11:32:06.685298  108424 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0919 11:32:06.702397  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:cloud-provider: (2.555418ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36722]
I0919 11:32:06.705345  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:32:06.705420  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:32:06.705461  108424 httplog.go:90] GET /healthz: (1.522929ms) 0 [Go-http-client/1.1 127.0.0.1:36676]
I0919 11:32:06.706322  108424 httplog.go:90] GET /api/v1/namespaces/kube-system: (3.105494ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36722]
I0919 11:32:06.725643  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (5.63561ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36722]
I0919 11:32:06.725914  108424 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0919 11:32:06.727471  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:32:06.727496  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:32:06.727532  108424 httplog.go:90] GET /healthz: (5.997663ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
I0919 11:32:06.742521  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:token-cleaner: (2.399227ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
I0919 11:32:06.745415  108424 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.396538ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
I0919 11:32:06.764419  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (3.766198ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
I0919 11:32:06.764709  108424 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0919 11:32:06.781458  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings/system:controller:bootstrap-signer: (1.523865ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
I0919 11:32:06.783575  108424 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.664119ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
I0919 11:32:06.802510  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings: (2.516482ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
I0919 11:32:06.802776  108424 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0919 11:32:06.808805  108424 httplog.go:90] GET /healthz: (4.334054ms) 200 [Go-http-client/1.1 127.0.0.1:36676]
W0919 11:32:06.810012  108424 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 11:32:06.810083  108424 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 11:32:06.810099  108424 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 11:32:06.810141  108424 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 11:32:06.810155  108424 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 11:32:06.810167  108424 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 11:32:06.810178  108424 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 11:32:06.810203  108424 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 11:32:06.810219  108424 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 11:32:06.810231  108424 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 11:32:06.810295  108424 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0919 11:32:06.810318  108424 factory.go:294] Creating scheduler from algorithm provider 'DefaultProvider'
I0919 11:32:06.810330  108424 factory.go:382] Creating scheduler with fit predicates 'map[CheckNodeUnschedulable:{} CheckVolumeBinding:{} GeneralPredicates:{} MatchInterPodAffinity:{} MaxAzureDiskVolumeCount:{} MaxCSIVolumeCountPred:{} MaxEBSVolumeCount:{} MaxGCEPDVolumeCount:{} NoDiskConflict:{} NoVolumeZoneConflict:{} PodToleratesNodeTaints:{}]' and priority functions 'map[BalancedResourceAllocation:{} ImageLocalityPriority:{} InterPodAffinityPriority:{} LeastRequestedPriority:{} NodeAffinityPriority:{} NodePreferAvoidPodsPriority:{} SelectorSpreadPriority:{} TaintTolerationPriority:{}]'
I0919 11:32:06.810587  108424 shared_informer.go:197] Waiting for caches to sync for scheduler
I0919 11:32:06.810855  108424 reflector.go:118] Starting reflector *v1.Pod (12h0m0s) from k8s.io/kubernetes/test/integration/scheduler/util.go:231
I0919 11:32:06.810876  108424 reflector.go:153] Listing and watching *v1.Pod from k8s.io/kubernetes/test/integration/scheduler/util.go:231
I0919 11:32:06.812450  108424 httplog.go:90] GET /api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0: (806.154µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
I0919 11:32:06.813311  108424 get.go:251] Starting watch for /api/v1/pods, rv=30563 labels= fields=status.phase!=Failed,status.phase!=Succeeded timeout=5m42s
I0919 11:32:06.822540  108424 httplog.go:90] GET /healthz: (1.001847ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36722]
I0919 11:32:06.825001  108424 httplog.go:90] GET /api/v1/namespaces/default: (2.025813ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36722]
I0919 11:32:06.829906  108424 httplog.go:90] POST /api/v1/namespaces: (3.863979ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36722]
I0919 11:32:06.834237  108424 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (3.957325ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36722]
I0919 11:32:06.847849  108424 httplog.go:90] POST /api/v1/namespaces/default/services: (11.20033ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36722]
I0919 11:32:06.850142  108424 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.80312ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36722]
I0919 11:32:06.880020  108424 httplog.go:90] POST /api/v1/namespaces/default/endpoints: (4.539674ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36722]
I0919 11:32:06.910873  108424 shared_informer.go:227] caches populated
I0919 11:32:06.910906  108424 shared_informer.go:204] Caches are synced for scheduler 
I0919 11:32:06.911286  108424 reflector.go:118] Starting reflector *v1.StorageClass (1s) from k8s.io/client-go/informers/factory.go:134
I0919 11:32:06.911310  108424 reflector.go:153] Listing and watching *v1.StorageClass from k8s.io/client-go/informers/factory.go:134
I0919 11:32:06.911771  108424 reflector.go:118] Starting reflector *v1beta1.PodDisruptionBudget (1s) from k8s.io/client-go/informers/factory.go:134
I0919 11:32:06.911790  108424 reflector.go:153] Listing and watching *v1beta1.PodDisruptionBudget from k8s.io/client-go/informers/factory.go:134
I0919 11:32:06.912245  108424 reflector.go:118] Starting reflector *v1.Node (1s) from k8s.io/client-go/informers/factory.go:134
I0919 11:32:06.912260  108424 reflector.go:153] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:134
I0919 11:32:06.912335  108424 httplog.go:90] GET /apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: (641.79µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36722]
I0919 11:32:06.912722  108424 reflector.go:118] Starting reflector *v1.ReplicationController (1s) from k8s.io/client-go/informers/factory.go:134
I0919 11:32:06.912738  108424 reflector.go:118] Starting reflector *v1.Service (1s) from k8s.io/client-go/informers/factory.go:134
I0919 11:32:06.912754  108424 reflector.go:153] Listing and watching *v1.Service from k8s.io/client-go/informers/factory.go:134
I0919 11:32:06.912772  108424 reflector.go:118] Starting reflector *v1beta1.CSINode (1s) from k8s.io/client-go/informers/factory.go:134
I0919 11:32:06.912789  108424 reflector.go:153] Listing and watching *v1beta1.CSINode from k8s.io/client-go/informers/factory.go:134
I0919 11:32:06.912886  108424 reflector.go:118] Starting reflector *v1.StatefulSet (1s) from k8s.io/client-go/informers/factory.go:134
I0919 11:32:06.912897  108424 reflector.go:153] Listing and watching *v1.StatefulSet from k8s.io/client-go/informers/factory.go:134
I0919 11:32:06.912741  108424 reflector.go:153] Listing and watching *v1.ReplicationController from k8s.io/client-go/informers/factory.go:134
I0919 11:32:06.913279  108424 reflector.go:118] Starting reflector *v1.ReplicaSet (1s) from k8s.io/client-go/informers/factory.go:134
I0919 11:32:06.913293  108424 reflector.go:153] Listing and watching *v1.ReplicaSet from k8s.io/client-go/informers/factory.go:134
I0919 11:32:06.913387  108424 httplog.go:90] GET /apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: (403.76µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36722]
I0919 11:32:06.913534  108424 get.go:251] Starting watch for /apis/storage.k8s.io/v1/storageclasses, rv=30568 labels= fields= timeout=5m43s
I0919 11:32:06.914120  108424 httplog.go:90] GET /api/v1/replicationcontrollers?limit=500&resourceVersion=0: (574.713µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36890]
I0919 11:32:06.914134  108424 httplog.go:90] GET /apis/apps/v1/replicasets?limit=500&resourceVersion=0: (412.601µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36892]
I0919 11:32:06.914188  108424 httplog.go:90] GET /api/v1/services?limit=500&resourceVersion=0: (486.277µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36884]
I0919 11:32:06.914310  108424 httplog.go:90] GET /api/v1/nodes?limit=500&resourceVersion=0: (422.412µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36722]
I0919 11:32:06.914640  108424 httplog.go:90] GET /apis/storage.k8s.io/v1beta1/csinodes?limit=500&resourceVersion=0: (353.919µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36886]
I0919 11:32:06.914647  108424 get.go:251] Starting watch for /apis/policy/v1beta1/poddisruptionbudgets, rv=30568 labels= fields= timeout=6m39s
I0919 11:32:06.914966  108424 get.go:251] Starting watch for /api/v1/services, rv=30866 labels= fields= timeout=8m55s
I0919 11:32:06.915095  108424 httplog.go:90] GET /apis/apps/v1/statefulsets?limit=500&resourceVersion=0: (363.724µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36888]
I0919 11:32:06.915137  108424 get.go:251] Starting watch for /api/v1/replicationcontrollers, rv=30563 labels= fields= timeout=5m47s
I0919 11:32:06.915450  108424 get.go:251] Starting watch for /apis/apps/v1/replicasets, rv=30568 labels= fields= timeout=8m6s
I0919 11:32:06.915453  108424 get.go:251] Starting watch for /api/v1/nodes, rv=30563 labels= fields= timeout=5m14s
I0919 11:32:06.915615  108424 get.go:251] Starting watch for /apis/storage.k8s.io/v1beta1/csinodes, rv=30568 labels= fields= timeout=6m34s
I0919 11:32:06.915725  108424 get.go:251] Starting watch for /apis/apps/v1/statefulsets, rv=30568 labels= fields= timeout=9m12s
I0919 11:32:06.915814  108424 reflector.go:118] Starting reflector *v1.PersistentVolume (1s) from k8s.io/client-go/informers/factory.go:134
I0919 11:32:06.915828  108424 reflector.go:153] Listing and watching *v1.PersistentVolume from k8s.io/client-go/informers/factory.go:134
I0919 11:32:06.916051  108424 reflector.go:118] Starting reflector *v1.PersistentVolumeClaim (1s) from k8s.io/client-go/informers/factory.go:134
I0919 11:32:06.916069  108424 reflector.go:153] Listing and watching *v1.PersistentVolumeClaim from k8s.io/client-go/informers/factory.go:134
I0919 11:32:06.916681  108424 httplog.go:90] GET /api/v1/persistentvolumes?limit=500&resourceVersion=0: (418.326µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36898]
I0919 11:32:06.916886  108424 httplog.go:90] GET /api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: (399.37µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36900]
I0919 11:32:06.917420  108424 get.go:251] Starting watch for /api/v1/persistentvolumes, rv=30563 labels= fields= timeout=7m56s
I0919 11:32:06.917653  108424 get.go:251] Starting watch for /api/v1/persistentvolumeclaims, rv=30563 labels= fields= timeout=8m50s
E0919 11:32:06.979226  108424 factory.go:590] Error getting pod permit-plugin98d730eb-10d8-4279-8fc9-576571991a2a/test-pod for retry: Get http://127.0.0.1:35423/api/v1/namespaces/permit-plugin98d730eb-10d8-4279-8fc9-576571991a2a/pods/test-pod: dial tcp 127.0.0.1:35423: connect: connection refused; retrying...
I0919 11:32:07.011218  108424 shared_informer.go:227] caches populated
I0919 11:32:07.011253  108424 shared_informer.go:227] caches populated
I0919 11:32:07.011260  108424 shared_informer.go:227] caches populated
I0919 11:32:07.011267  108424 shared_informer.go:227] caches populated
I0919 11:32:07.011273  108424 shared_informer.go:227] caches populated
I0919 11:32:07.011280  108424 shared_informer.go:227] caches populated
I0919 11:32:07.011286  108424 shared_informer.go:227] caches populated
I0919 11:32:07.011292  108424 shared_informer.go:227] caches populated
I0919 11:32:07.011298  108424 shared_informer.go:227] caches populated
I0919 11:32:07.011310  108424 shared_informer.go:227] caches populated
I0919 11:32:07.011331  108424 shared_informer.go:227] caches populated
I0919 11:32:07.014175  108424 httplog.go:90] POST /api/v1/nodes: (2.330882ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36880]
I0919 11:32:07.014838  108424 node_tree.go:93] Added node "testnode" in group "" to NodeTree
I0919 11:32:07.017849  108424 httplog.go:90] PUT /api/v1/nodes/testnode/status: (3.169275ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36880]
I0919 11:32:07.020201  108424 httplog.go:90] POST /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods: (1.811853ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36880]
I0919 11:32:07.021069  108424 scheduling_queue.go:830] About to try and schedule pod node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pidpressure-fake-name
I0919 11:32:07.021087  108424 scheduler.go:530] Attempting to schedule pod: node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pidpressure-fake-name
I0919 11:32:07.021228  108424 scheduler_binder.go:257] AssumePodVolumes for pod "node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pidpressure-fake-name", node "testnode"
I0919 11:32:07.021244  108424 scheduler_binder.go:267] AssumePodVolumes for pod "node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pidpressure-fake-name", node "testnode": all PVCs bound and nothing to do
I0919 11:32:07.021287  108424 factory.go:606] Attempting to bind pidpressure-fake-name to testnode
I0919 11:32:07.023296  108424 httplog.go:90] POST /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name/binding: (1.731923ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36880]
I0919 11:32:07.023718  108424 scheduler.go:662] pod node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pidpressure-fake-name is bound successfully on node "testnode", 1 nodes evaluated, 1 nodes were found feasible. Bound node resource: "Capacity: CPU<0>|Memory<0>|Pods<32>|StorageEphemeral<0>; Allocatable: CPU<0>|Memory<0>|Pods<32>|StorageEphemeral<0>.".
I0919 11:32:07.028617  108424 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/events: (4.6033ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36880]
I0919 11:32:07.123285  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (1.808014ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36880]
I0919 11:32:07.223495  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (2.049491ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36880]
I0919 11:32:07.323715  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (2.127795ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36880]
I0919 11:32:07.423284  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (1.795658ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36880]
I0919 11:32:07.524881  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (2.152882ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36880]
I0919 11:32:07.623396  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (1.841907ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36880]
I0919 11:32:07.723476  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (1.962676ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36880]
I0919 11:32:07.823704  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (2.132842ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36880]
I0919 11:32:07.913080  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:32:07.914860  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:32:07.915261  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:32:07.915385  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:32:07.917312  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:32:07.917537  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:32:07.924964  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (2.165868ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36880]
[... repeated lines omitted: GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name polled at ~100ms intervals (all 200, 1.5–5.3ms) interleaved with k8s.io/client-go/informers/factory.go:134 reflector resyncs at ~1s intervals, 11:32:08.024 through 11:32:16.523 ...]
E0919 11:32:16.538244  108424 event_broadcaster.go:244] Unable to write event: 'Post http://127.0.0.1:35423/apis/events.k8s.io/v1beta1/namespaces/permit-plugin98d730eb-10d8-4279-8fc9-576571991a2a/events: dial tcp 127.0.0.1:35423: connect: connection refused' (may retry after sleeping)
I0919 11:32:16.623834  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (2.142478ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36880]
I0919 11:32:16.723635  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (2.058058ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36880]
I0919 11:32:16.824240  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (2.66763ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36880]
I0919 11:32:16.825000  108424 httplog.go:90] GET /api/v1/namespaces/default: (1.233806ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:16.826915  108424 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.463685ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:16.828589  108424 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.290091ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
[... repeated lines omitted: same ~100ms GET polls of pidpressure-fake-name (all 200) and ~1s reflector resyncs, 11:32:16.915 through 11:32:19.723 ...]
I0919 11:32:19.823413  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (1.79451ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:19.916164  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:32:19.918801  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:32:19.922281  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:32:19.922329  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:32:19.923619  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (2.146032ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:19.927768  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:32:19.928158  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:32:20.024243  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (2.438923ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:20.127832  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (2.583651ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:20.224912  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (2.743721ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:20.323513  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (1.914846ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:20.423279  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (1.915021ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:20.523127  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (1.501379ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:20.624033  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (2.535178ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:20.723532  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (2.009514ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:20.823778  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (1.478914ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:20.916461  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:32:20.919331  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:32:20.922871  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:32:20.923800  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (1.652114ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:20.924135  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:32:20.927970  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:32:20.928299  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:32:21.023310  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (1.831943ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:21.123549  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (1.711541ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:21.223561  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (2.084807ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:21.323728  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (2.188102ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:21.424330  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (2.635491ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:21.524183  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (2.599164ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:21.624249  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (2.713207ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:21.723335  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (1.920123ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:21.823433  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (1.899976ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:21.916983  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:32:21.919453  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:32:21.923397  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (2.001916ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:21.923752  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:32:21.924268  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:32:21.928123  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:32:21.928590  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:32:22.023692  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (2.169176ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:22.123461  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (1.980486ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:22.223116  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (1.627247ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:22.323506  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (2.004255ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:22.423742  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (2.21105ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:22.523604  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (1.952089ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:22.623985  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (2.496153ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:22.723396  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (1.881212ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:22.823439  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (1.960616ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:22.917140  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:32:22.919695  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:32:22.923290  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (1.862394ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:22.924534  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:32:22.924690  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:32:22.928378  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:32:22.928789  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:32:23.023745  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (2.29818ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:23.123569  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (2.010978ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:23.223174  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (1.667829ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:23.323293  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (1.732107ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:23.423609  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (2.159442ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:23.523185  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (1.69439ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:23.623166  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (1.679232ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:23.723720  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (2.163092ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:23.823330  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (1.838442ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:23.917282  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:32:23.919810  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:32:23.923285  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (1.861786ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:23.924676  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:32:23.924816  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:32:23.928568  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:32:23.928957  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:32:24.023682  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (2.100907ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:24.123627  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (2.05004ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:24.223418  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (1.970171ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:24.323252  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (1.745562ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:24.423632  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (2.195468ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:24.523311  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (1.792538ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:24.623372  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (1.804255ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:24.725660  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (1.479667ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:24.823088  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (1.54431ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:24.917429  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:32:24.919930  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:32:24.923389  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (1.899378ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:24.924855  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:32:24.924964  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:32:24.928693  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:32:24.930966  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:32:25.023077  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (1.592523ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:25.123596  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (2.090745ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:25.223304  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (1.770916ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:25.323174  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (1.613624ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:25.423578  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (2.009898ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:25.523212  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (1.793177ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:25.623263  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (1.784909ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:25.723645  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (2.052381ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:25.823181  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (1.719254ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:25.917636  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:32:25.920136  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:32:25.923480  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (1.983867ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:25.925026  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:32:25.925147  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:32:25.928921  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:32:25.931154  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:32:26.023261  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (1.726995ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:26.134225  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (12.668821ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:26.223207  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (1.771053ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:26.328108  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (5.239795ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:26.425605  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (4.161074ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:26.524548  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (2.543708ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:26.622766  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (1.43267ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:26.723751  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (2.182352ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:26.822918  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (1.449719ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:26.824512  108424 httplog.go:90] GET /api/v1/namespaces/default: (1.307191ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:26.826349  108424 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.435031ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:26.827930  108424 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.215257ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:26.917857  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:32:26.920340  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:32:26.923580  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (2.012721ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:26.925250  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:32:26.925285  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:32:26.929195  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:32:26.931348  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:32:27.023488  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (1.970213ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:27.123298  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (1.79386ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:27.223220  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (1.71831ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:27.323678  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (2.044232ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:27.423040  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (1.62123ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
E0919 11:32:27.479182  108424 event_broadcaster.go:244] Unable to write event: 'Post http://127.0.0.1:35423/apis/events.k8s.io/v1beta1/namespaces/permit-plugin98d730eb-10d8-4279-8fc9-576571991a2a/events: dial tcp 127.0.0.1:35423: connect: connection refused' (may retry after sleeping)
I0919 11:32:27.523458  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (1.948908ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:27.623284  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (1.796686ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:27.723244  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (1.750545ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:27.823609  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (2.090739ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:27.918086  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:32:27.920627  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:32:27.923130  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (1.613521ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:27.925409  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:32:27.925456  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:32:27.929607  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:32:27.931550  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:32:28.022961  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (1.491006ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:28.123627  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (2.119791ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:28.223243  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (1.746512ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:28.323535  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (1.891451ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:28.424071  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (2.524724ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:28.525015  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (3.28672ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:28.623613  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (2.084031ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:28.724045  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (2.161425ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:28.823641  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (2.057244ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:28.918274  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:32:28.920845  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:32:28.923673  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (2.115786ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:28.925613  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:32:28.925619  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:32:28.929832  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:32:28.931778  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:32:29.025135  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (2.588332ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:29.123676  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (2.093611ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:29.224149  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (2.55682ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:29.323264  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (1.765941ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:29.424159  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (2.63189ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:29.524356  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (2.858285ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:29.623520  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (1.941246ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:29.723232  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (1.779485ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:29.823353  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (1.778054ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:29.918513  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:32:29.920973  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:32:29.923434  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (1.913691ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:29.925805  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:32:29.925853  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:32:29.930145  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:32:29.932035  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:32:30.023562  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (2.062472ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:30.123528  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (2.016705ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:30.223783  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (2.377943ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:30.323289  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (1.811524ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:30.423504  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (2.027523ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:30.523244  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (1.796405ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:30.623626  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (2.099222ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:30.723682  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (2.196075ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:30.824132  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (2.531165ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:30.918695  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:32:30.921186  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:32:30.923227  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (1.701896ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:30.926016  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:32:30.926016  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:32:30.932216  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:32:30.932595  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:32:31.023438  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (1.901029ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:31.123156  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (1.729074ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:31.223293  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (1.778843ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:31.323717  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (2.198307ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:31.423303  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (1.859697ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:31.523479  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (1.944864ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:31.622926  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (1.503458ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:31.723127  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (1.658187ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:31.823400  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (1.863023ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:31.918892  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:32:31.921321  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:32:31.923489  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (2.04971ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:31.926454  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:32:31.926462  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:32:31.932418  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:32:31.932944  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:32:32.023276  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (1.642635ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:32.123497  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (1.903596ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:32.223230  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (1.749943ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:32.323927  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (2.306122ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:32.423440  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (1.882449ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:32.524130  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (2.499839ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
E0919 11:32:32.579869  108424 factory.go:590] Error getting pod permit-plugin98d730eb-10d8-4279-8fc9-576571991a2a/test-pod for retry: Get http://127.0.0.1:35423/api/v1/namespaces/permit-plugin98d730eb-10d8-4279-8fc9-576571991a2a/pods/test-pod: dial tcp 127.0.0.1:35423: connect: connection refused; retrying...
I0919 11:32:32.623750  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (2.092343ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:32.723770  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (2.143504ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:32.823812  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (2.214986ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:32.919152  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:32:32.921667  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:32:32.924329  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (2.77899ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:32.926794  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:32:32.926844  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:32:32.933622  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:32:32.933937  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:32:33.023830  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (2.2466ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:33.123871  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (2.252478ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:33.223483  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (1.928915ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:33.323682  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (2.02777ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:33.426000  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (1.856076ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:33.523353  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (1.39089ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:33.623706  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (1.908213ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:33.723702  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (2.0058ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:33.823468  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (1.976787ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:33.919340  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:32:33.921814  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:32:33.923440  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (1.946441ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:33.927051  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:32:33.927108  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:32:33.933843  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:32:33.934148  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:32:34.023424  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (1.904745ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:34.123421  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (1.863076ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:34.223581  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (2.069806ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:34.323627  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (1.991117ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:34.423749  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (2.09884ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:34.523399  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (1.858066ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:34.623106  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (1.648826ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:34.723442  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (1.933126ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:34.823961  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (2.353774ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:34.919586  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:32:34.922062  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:32:34.923701  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (2.177543ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:34.927199  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:32:34.927230  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:32:34.934079  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:32:34.934415  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:32:35.023315  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (1.756549ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:35.123641  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (2.027425ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:35.223460  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (1.865185ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:35.323347  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (1.853653ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:35.423736  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (2.229517ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:35.522890  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (1.44215ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:35.623070  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (1.564293ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:35.723299  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (1.775026ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:35.823344  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (1.827559ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:35.919773  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:32:35.922345  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:32:35.923879  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (2.267996ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:35.927395  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:32:35.927406  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:32:35.934418  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:32:35.934576  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:32:36.023797  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (2.100204ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:36.123631  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (2.123732ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:36.223263  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (1.758316ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:36.323406  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (1.955183ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:36.423238  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (1.823165ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:36.523090  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (1.646568ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:36.623253  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (1.772619ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:36.722904  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (1.486928ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:36.823568  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (2.05116ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37166]
I0919 11:32:36.825032  108424 httplog.go:90] GET /api/v1/namespaces/default: (1.58664ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36880]
I0919 11:32:36.826901  108424 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.339187ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36880]
I0919 11:32:36.828688  108424 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.219961ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36880]
I0919 11:32:36.919979  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:32:36.922606  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:32:36.923070  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (1.642036ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36880]
I0919 11:32:36.927578  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:32:36.927646  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:32:36.934640  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:32:36.934795  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:32:37.023588  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (2.098754ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36880]
I0919 11:32:37.026045  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (1.802508ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36880]
I0919 11:32:37.033312  108424 httplog.go:90] DELETE /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (6.688557ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36880]
I0919 11:32:37.036480  108424 httplog.go:90] GET /api/v1/namespaces/node-pid-pressurec210ffe1-294a-4201-83aa-4debd22a8eb8/pods/pidpressure-fake-name: (1.519379ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36880]
E0919 11:32:37.036981  108424 scheduling_queue.go:833] Error while retrieving next pod from scheduling queue: scheduling queue is closed
I0919 11:32:37.037515  108424 httplog.go:90] GET /api/v1/nodes?allowWatchBookmarks=true&resourceVersion=30563&timeout=5m14s&timeoutSeconds=314&watch=true: (30.122327709s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36890]
I0919 11:32:37.037587  108424 httplog.go:90] GET /api/v1/services?allowWatchBookmarks=true&resourceVersion=30866&timeout=8m55s&timeoutSeconds=535&watch=true: (30.122876293s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36892]
I0919 11:32:37.037595  108424 httplog.go:90] GET /apis/policy/v1beta1/poddisruptionbudgets?allowWatchBookmarks=true&resourceVersion=30568&timeout=6m39s&timeoutSeconds=399&watch=true: (30.123168715s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36894]
I0919 11:32:37.037516  108424 httplog.go:90] GET /api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=30563&timeout=8m50s&timeoutSeconds=530&watch=true: (30.120085555s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36900]
I0919 11:32:37.037524  108424 httplog.go:90] GET /api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=30563&timeout=7m56s&timeoutSeconds=476&watch=true: (30.120380269s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36902]
I0919 11:32:37.037742  108424 httplog.go:90] GET /apis/apps/v1/replicasets?allowWatchBookmarks=true&resourceVersion=30568&timeout=8m6s&timeoutSeconds=486&watch=true: (30.122546884s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36884]
I0919 11:32:37.037741  108424 httplog.go:90] GET /api/v1/replicationcontrollers?allowWatchBookmarks=true&resourceVersion=30563&timeout=5m47s&timeoutSeconds=347&watch=true: (30.122854605s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36722]
I0919 11:32:37.037770  108424 httplog.go:90] GET /api/v1/pods?allowWatchBookmarks=true&fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&resourceVersion=30563&timeoutSeconds=342&watch=true: (30.224859928s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36676]
I0919 11:32:37.037849  108424 httplog.go:90] GET /apis/storage.k8s.io/v1beta1/csinodes?allowWatchBookmarks=true&resourceVersion=30568&timeout=6m34s&timeoutSeconds=394&watch=true: (30.122521763s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36886]
I0919 11:32:37.037860  108424 httplog.go:90] GET /apis/apps/v1/statefulsets?allowWatchBookmarks=true&resourceVersion=30568&timeout=9m12s&timeoutSeconds=552&watch=true: (30.122342545s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36888]
I0919 11:32:37.038103  108424 httplog.go:90] GET /apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=30568&timeout=5m43s&timeoutSeconds=343&watch=true: (30.124848099s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36882]
I0919 11:32:37.042512  108424 httplog.go:90] DELETE /api/v1/nodes: (4.648763ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36880]
I0919 11:32:37.042812  108424 controller.go:182] Shutting down kubernetes service endpoint reconciler
I0919 11:32:37.044583  108424 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.439728ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36880]
I0919 11:32:37.047050  108424 httplog.go:90] PUT /api/v1/namespaces/default/endpoints/kubernetes: (1.971668ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36880]
--- FAIL: TestNodePIDPressure (34.07s)
    predicates_test.go:924: Test Failed: error, timed out waiting for the condition, while waiting for scheduled

				from junit_d965d8661547eb73cabe6d94d5550ec333e4c0fa_20190919-112414.xml



k8s.io/kubernetes/test/integration/scheduler TestSchedulerCreationFromConfigMap 4.23s

go test -v k8s.io/kubernetes/test/integration/scheduler -run TestSchedulerCreationFromConfigMap$
=== RUN   TestSchedulerCreationFromConfigMap
W0919 11:34:15.057101  108424 services.go:35] No CIDR for service cluster IPs specified. Default value which was 10.0.0.0/24 is deprecated and will be removed in future releases. Please specify it using --service-cluster-ip-range on kube-apiserver.
I0919 11:34:15.057127  108424 services.go:47] Setting service IP to "10.0.0.1" (read-write).
I0919 11:34:15.057141  108424 master.go:303] Node port range unspecified. Defaulting to 30000-32767.
I0919 11:34:15.057151  108424 master.go:259] Using reconciler: 
I0919 11:34:15.058522  108424 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.058691  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:34:15.058782  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:34:15.059697  108424 store.go:1342] Monitoring podtemplates count at <storage-prefix>//podtemplates
I0919 11:34:15.059732  108424 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.059972  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:34:15.059990  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:34:15.060010  108424 reflector.go:153] Listing and watching *core.PodTemplate from storage/cacher.go:/podtemplates
I0919 11:34:15.061410  108424 store.go:1342] Monitoring events count at <storage-prefix>//events
I0919 11:34:15.061466  108424 watch_cache.go:405] Replace watchCache (rev: 47976) 
I0919 11:34:15.061465  108424 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.061516  108424 reflector.go:153] Listing and watching *core.Event from storage/cacher.go:/events
I0919 11:34:15.061639  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:34:15.061821  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:34:15.062565  108424 watch_cache.go:405] Replace watchCache (rev: 47976) 
I0919 11:34:15.063454  108424 store.go:1342] Monitoring limitranges count at <storage-prefix>//limitranges
I0919 11:34:15.063569  108424 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.063740  108424 reflector.go:153] Listing and watching *core.LimitRange from storage/cacher.go:/limitranges
I0919 11:34:15.063891  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:34:15.063962  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:34:15.065009  108424 watch_cache.go:405] Replace watchCache (rev: 47976) 
I0919 11:34:15.065674  108424 store.go:1342] Monitoring resourcequotas count at <storage-prefix>//resourcequotas
I0919 11:34:15.065760  108424 reflector.go:153] Listing and watching *core.ResourceQuota from storage/cacher.go:/resourcequotas
I0919 11:34:15.065866  108424 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.066122  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:34:15.066157  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:34:15.067952  108424 store.go:1342] Monitoring secrets count at <storage-prefix>//secrets
I0919 11:34:15.068037  108424 reflector.go:153] Listing and watching *core.Secret from storage/cacher.go:/secrets
I0919 11:34:15.068194  108424 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.068377  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:34:15.068411  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:34:15.069139  108424 store.go:1342] Monitoring persistentvolumes count at <storage-prefix>//persistentvolumes
I0919 11:34:15.069162  108424 watch_cache.go:405] Replace watchCache (rev: 47976) 
I0919 11:34:15.069184  108424 reflector.go:153] Listing and watching *core.PersistentVolume from storage/cacher.go:/persistentvolumes
I0919 11:34:15.069336  108424 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.069564  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:34:15.069589  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:34:15.069955  108424 watch_cache.go:405] Replace watchCache (rev: 47977) 
I0919 11:34:15.070354  108424 watch_cache.go:405] Replace watchCache (rev: 47977) 
I0919 11:34:15.071100  108424 store.go:1342] Monitoring persistentvolumeclaims count at <storage-prefix>//persistentvolumeclaims
I0919 11:34:15.071140  108424 reflector.go:153] Listing and watching *core.PersistentVolumeClaim from storage/cacher.go:/persistentvolumeclaims
I0919 11:34:15.071487  108424 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.072011  108424 watch_cache.go:405] Replace watchCache (rev: 47977) 
I0919 11:34:15.072045  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:34:15.072069  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:34:15.072945  108424 store.go:1342] Monitoring configmaps count at <storage-prefix>//configmaps
I0919 11:34:15.073050  108424 reflector.go:153] Listing and watching *core.ConfigMap from storage/cacher.go:/configmaps
I0919 11:34:15.073114  108424 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.073280  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:34:15.073305  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:34:15.074048  108424 store.go:1342] Monitoring namespaces count at <storage-prefix>//namespaces
I0919 11:34:15.074218  108424 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.074263  108424 watch_cache.go:405] Replace watchCache (rev: 47979) 
I0919 11:34:15.074445  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:34:15.074465  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:34:15.074468  108424 reflector.go:153] Listing and watching *core.Namespace from storage/cacher.go:/namespaces
I0919 11:34:15.075336  108424 store.go:1342] Monitoring endpoints count at <storage-prefix>//services/endpoints
I0919 11:34:15.075418  108424 reflector.go:153] Listing and watching *core.Endpoints from storage/cacher.go:/services/endpoints
I0919 11:34:15.075590  108424 watch_cache.go:405] Replace watchCache (rev: 47979) 
I0919 11:34:15.076030  108424 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.076379  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:34:15.076389  108424 watch_cache.go:405] Replace watchCache (rev: 47979) 
I0919 11:34:15.076404  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:34:15.077092  108424 store.go:1342] Monitoring nodes count at <storage-prefix>//minions
I0919 11:34:15.077151  108424 reflector.go:153] Listing and watching *core.Node from storage/cacher.go:/minions
I0919 11:34:15.077250  108424 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.077470  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:34:15.077492  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:34:15.078279  108424 watch_cache.go:405] Replace watchCache (rev: 47980) 
I0919 11:34:15.078541  108424 store.go:1342] Monitoring pods count at <storage-prefix>//pods
I0919 11:34:15.078580  108424 reflector.go:153] Listing and watching *core.Pod from storage/cacher.go:/pods
I0919 11:34:15.078690  108424 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.078790  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:34:15.078811  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:34:15.079578  108424 store.go:1342] Monitoring serviceaccounts count at <storage-prefix>//serviceaccounts
I0919 11:34:15.079720  108424 reflector.go:153] Listing and watching *core.ServiceAccount from storage/cacher.go:/serviceaccounts
I0919 11:34:15.079710  108424 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.079980  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:34:15.080000  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:34:15.080056  108424 watch_cache.go:405] Replace watchCache (rev: 47980) 
I0919 11:34:15.080531  108424 watch_cache.go:405] Replace watchCache (rev: 47980) 
I0919 11:34:15.080666  108424 store.go:1342] Monitoring services count at <storage-prefix>//services/specs
I0919 11:34:15.080689  108424 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.080806  108424 reflector.go:153] Listing and watching *core.Service from storage/cacher.go:/services/specs
I0919 11:34:15.080857  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:34:15.080874  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:34:15.081725  108424 watch_cache.go:405] Replace watchCache (rev: 47980) 
I0919 11:34:15.082011  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:34:15.082125  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:34:15.082934  108424 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.083075  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:34:15.083092  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:34:15.085116  108424 store.go:1342] Monitoring replicationcontrollers count at <storage-prefix>//controllers
I0919 11:34:15.085280  108424 rest.go:115] the default service ipfamily for this cluster is: IPv4
I0919 11:34:15.085167  108424 reflector.go:153] Listing and watching *core.ReplicationController from storage/cacher.go:/controllers
I0919 11:34:15.086801  108424 watch_cache.go:405] Replace watchCache (rev: 47983) 
I0919 11:34:15.087436  108424 storage_factory.go:285] storing bindings in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.087775  108424 storage_factory.go:285] storing componentstatuses in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.089539  108424 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.091317  108424 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.093803  108424 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.094875  108424 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.095406  108424 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.095558  108424 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.095796  108424 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.096412  108424 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.097214  108424 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.097728  108424 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.098912  108424 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.099677  108424 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.100310  108424 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.100643  108424 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.101532  108424 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.102008  108424 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.102405  108424 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.102645  108424 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.102972  108424 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.103328  108424 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.103741  108424 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.105151  108424 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.105751  108424 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.106896  108424 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.108479  108424 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.109058  108424 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.109719  108424 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.111071  108424 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.111506  108424 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.112651  108424 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.113964  108424 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.114783  108424 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.116593  108424 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.116995  108424 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.117226  108424 master.go:450] Skipping disabled API group "auditregistration.k8s.io".
I0919 11:34:15.117326  108424 master.go:461] Enabling API group "authentication.k8s.io".
I0919 11:34:15.117531  108424 master.go:461] Enabling API group "authorization.k8s.io".
I0919 11:34:15.117793  108424 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.118144  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:34:15.118245  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:34:15.119412  108424 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0919 11:34:15.119620  108424 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.119760  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:34:15.119789  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:34:15.119896  108424 reflector.go:153] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0919 11:34:15.121073  108424 watch_cache.go:405] Replace watchCache (rev: 47997) 
I0919 11:34:15.121476  108424 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0919 11:34:15.121542  108424 reflector.go:153] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0919 11:34:15.121726  108424 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.121947  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:34:15.121996  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:34:15.122854  108424 watch_cache.go:405] Replace watchCache (rev: 47997) 
I0919 11:34:15.123124  108424 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0919 11:34:15.123145  108424 master.go:461] Enabling API group "autoscaling".
I0919 11:34:15.123164  108424 reflector.go:153] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0919 11:34:15.123258  108424 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.123400  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:34:15.123419  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:34:15.124101  108424 watch_cache.go:405] Replace watchCache (rev: 47999) 
I0919 11:34:15.129585  108424 store.go:1342] Monitoring jobs.batch count at <storage-prefix>//jobs
I0919 11:34:15.129771  108424 reflector.go:153] Listing and watching *batch.Job from storage/cacher.go:/jobs
I0919 11:34:15.129794  108424 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.129961  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:34:15.129986  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:34:15.131502  108424 watch_cache.go:405] Replace watchCache (rev: 48000) 
I0919 11:34:15.131996  108424 store.go:1342] Monitoring cronjobs.batch count at <storage-prefix>//cronjobs
I0919 11:34:15.132024  108424 master.go:461] Enabling API group "batch".
I0919 11:34:15.132204  108424 reflector.go:153] Listing and watching *batch.CronJob from storage/cacher.go:/cronjobs
I0919 11:34:15.132223  108424 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.132449  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:34:15.132476  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:34:15.133720  108424 watch_cache.go:405] Replace watchCache (rev: 48000) 
I0919 11:34:15.133920  108424 store.go:1342] Monitoring certificatesigningrequests.certificates.k8s.io count at <storage-prefix>//certificatesigningrequests
I0919 11:34:15.133945  108424 master.go:461] Enabling API group "certificates.k8s.io".
I0919 11:34:15.134040  108424 reflector.go:153] Listing and watching *certificates.CertificateSigningRequest from storage/cacher.go:/certificatesigningrequests
I0919 11:34:15.135075  108424 watch_cache.go:405] Replace watchCache (rev: 48001) 
I0919 11:34:15.135481  108424 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.136082  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:34:15.136127  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:34:15.137086  108424 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0919 11:34:15.137166  108424 reflector.go:153] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0919 11:34:15.137283  108424 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.137449  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:34:15.137474  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:34:15.138722  108424 watch_cache.go:405] Replace watchCache (rev: 48001) 
I0919 11:34:15.139613  108424 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0919 11:34:15.139644  108424 master.go:461] Enabling API group "coordination.k8s.io".
I0919 11:34:15.139662  108424 master.go:450] Skipping disabled API group "discovery.k8s.io".
I0919 11:34:15.139711  108424 reflector.go:153] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0919 11:34:15.139842  108424 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.140035  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:34:15.140068  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:34:15.141146  108424 watch_cache.go:405] Replace watchCache (rev: 48002) 
I0919 11:34:15.143262  108424 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0919 11:34:15.143297  108424 master.go:461] Enabling API group "extensions".
I0919 11:34:15.143352  108424 reflector.go:153] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0919 11:34:15.143511  108424 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.143720  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:34:15.143749  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:34:15.144618  108424 store.go:1342] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I0919 11:34:15.144702  108424 reflector.go:153] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I0919 11:34:15.144813  108424 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.144954  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:34:15.145111  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:34:15.146355  108424 watch_cache.go:405] Replace watchCache (rev: 48003) 
I0919 11:34:15.146837  108424 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0919 11:34:15.146866  108424 master.go:461] Enabling API group "networking.k8s.io".
I0919 11:34:15.146905  108424 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.147047  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:34:15.147080  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:34:15.147156  108424 reflector.go:153] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0919 11:34:15.147186  108424 watch_cache.go:405] Replace watchCache (rev: 48003) 
I0919 11:34:15.148659  108424 watch_cache.go:405] Replace watchCache (rev: 48003) 
I0919 11:34:15.148696  108424 store.go:1342] Monitoring runtimeclasses.node.k8s.io count at <storage-prefix>//runtimeclasses
I0919 11:34:15.148724  108424 master.go:461] Enabling API group "node.k8s.io".
I0919 11:34:15.148855  108424 reflector.go:153] Listing and watching *node.RuntimeClass from storage/cacher.go:/runtimeclasses
I0919 11:34:15.148898  108424 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.149063  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:34:15.149091  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:34:15.150208  108424 store.go:1342] Monitoring poddisruptionbudgets.policy count at <storage-prefix>//poddisruptionbudgets
I0919 11:34:15.150218  108424 watch_cache.go:405] Replace watchCache (rev: 48004) 
I0919 11:34:15.150244  108424 reflector.go:153] Listing and watching *policy.PodDisruptionBudget from storage/cacher.go:/poddisruptionbudgets
I0919 11:34:15.150436  108424 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.150601  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:34:15.150623  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:34:15.151107  108424 watch_cache.go:405] Replace watchCache (rev: 48004) 
I0919 11:34:15.151411  108424 store.go:1342] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicy
I0919 11:34:15.151439  108424 master.go:461] Enabling API group "policy".
I0919 11:34:15.151472  108424 reflector.go:153] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicy
I0919 11:34:15.151469  108424 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.151677  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:34:15.151699  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:34:15.153101  108424 watch_cache.go:405] Replace watchCache (rev: 48004) 
I0919 11:34:15.154790  108424 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0919 11:34:15.154819  108424 reflector.go:153] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0919 11:34:15.155733  108424 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.156802  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:34:15.156874  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:34:15.158626  108424 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0919 11:34:15.158836  108424 watch_cache.go:405] Replace watchCache (rev: 48005) 
I0919 11:34:15.158921  108424 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.159088  108424 reflector.go:153] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0919 11:34:15.159433  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:34:15.159601  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:34:15.160605  108424 watch_cache.go:405] Replace watchCache (rev: 48006) 
I0919 11:34:15.160711  108424 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0919 11:34:15.160811  108424 reflector.go:153] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0919 11:34:15.160943  108424 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.161540  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:34:15.161580  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:34:15.162020  108424 watch_cache.go:405] Replace watchCache (rev: 48006) 
I0919 11:34:15.162572  108424 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0919 11:34:15.162657  108424 reflector.go:153] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0919 11:34:15.162654  108424 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.162930  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:34:15.162953  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:34:15.163488  108424 watch_cache.go:405] Replace watchCache (rev: 48006) 
I0919 11:34:15.163986  108424 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0919 11:34:15.164071  108424 reflector.go:153] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0919 11:34:15.164371  108424 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.164696  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:34:15.164728  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:34:15.165074  108424 watch_cache.go:405] Replace watchCache (rev: 48006) 
I0919 11:34:15.167621  108424 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0919 11:34:15.167723  108424 reflector.go:153] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0919 11:34:15.167894  108424 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.168353  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:34:15.168402  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:34:15.168842  108424 watch_cache.go:405] Replace watchCache (rev: 48007) 
I0919 11:34:15.170211  108424 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0919 11:34:15.170247  108424 reflector.go:153] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0919 11:34:15.170402  108424 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.170539  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:34:15.170562  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:34:15.171274  108424 watch_cache.go:405] Replace watchCache (rev: 48008) 
I0919 11:34:15.171488  108424 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0919 11:34:15.171518  108424 master.go:461] Enabling API group "rbac.authorization.k8s.io".
I0919 11:34:15.171540  108424 reflector.go:153] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0919 11:34:15.173352  108424 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.175669  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:34:15.175717  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:34:15.175831  108424 watch_cache.go:405] Replace watchCache (rev: 48008) 
I0919 11:34:15.177154  108424 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0919 11:34:15.177215  108424 reflector.go:153] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0919 11:34:15.177803  108424 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.178239  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:34:15.178461  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:34:15.178491  108424 watch_cache.go:405] Replace watchCache (rev: 48009) 
I0919 11:34:15.179910  108424 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0919 11:34:15.180036  108424 master.go:461] Enabling API group "scheduling.k8s.io".
I0919 11:34:15.180639  108424 reflector.go:153] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0919 11:34:15.184460  108424 master.go:450] Skipping disabled API group "settings.k8s.io".
I0919 11:34:15.184806  108424 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.185166  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:34:15.185280  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:34:15.184594  108424 watch_cache.go:405] Replace watchCache (rev: 48011) 
I0919 11:34:15.186344  108424 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0919 11:34:15.186422  108424 reflector.go:153] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0919 11:34:15.186835  108424 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.190659  108424 watch_cache.go:405] Replace watchCache (rev: 48013) 
I0919 11:34:15.191694  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:34:15.191851  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:34:15.193190  108424 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0919 11:34:15.193256  108424 reflector.go:153] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0919 11:34:15.193248  108424 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.193449  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:34:15.193481  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:34:15.194475  108424 store.go:1342] Monitoring csinodes.storage.k8s.io count at <storage-prefix>//csinodes
I0919 11:34:15.194561  108424 reflector.go:153] Listing and watching *storage.CSINode from storage/cacher.go:/csinodes
I0919 11:34:15.194598  108424 watch_cache.go:405] Replace watchCache (rev: 48014) 
I0919 11:34:15.194594  108424 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.194796  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:34:15.194815  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:34:15.195524  108424 store.go:1342] Monitoring csidrivers.storage.k8s.io count at <storage-prefix>//csidrivers
I0919 11:34:15.195609  108424 reflector.go:153] Listing and watching *storage.CSIDriver from storage/cacher.go:/csidrivers
I0919 11:34:15.195747  108424 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.195923  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:34:15.195951  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:34:15.196058  108424 watch_cache.go:405] Replace watchCache (rev: 48014) 
I0919 11:34:15.196590  108424 watch_cache.go:405] Replace watchCache (rev: 48014) 
I0919 11:34:15.196884  108424 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0919 11:34:15.196936  108424 reflector.go:153] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0919 11:34:15.197043  108424 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.197158  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:34:15.197178  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:34:15.198425  108424 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0919 11:34:15.198457  108424 master.go:461] Enabling API group "storage.k8s.io".
I0919 11:34:15.198626  108424 reflector.go:153] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0919 11:34:15.199018  108424 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.199276  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:34:15.199302  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:34:15.199620  108424 watch_cache.go:405] Replace watchCache (rev: 48014) 
I0919 11:34:15.199621  108424 watch_cache.go:405] Replace watchCache (rev: 48014) 
I0919 11:34:15.200303  108424 store.go:1342] Monitoring deployments.apps count at <storage-prefix>//deployments
I0919 11:34:15.200437  108424 reflector.go:153] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0919 11:34:15.200532  108424 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.200681  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:34:15.200696  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:34:15.201792  108424 watch_cache.go:405] Replace watchCache (rev: 48014) 
I0919 11:34:15.201859  108424 store.go:1342] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0919 11:34:15.201909  108424 reflector.go:153] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0919 11:34:15.202043  108424 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.202195  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:34:15.202227  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:34:15.203089  108424 watch_cache.go:405] Replace watchCache (rev: 48015) 
I0919 11:34:15.203943  108424 store.go:1342] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0919 11:34:15.204558  108424 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.204693  108424 reflector.go:153] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0919 11:34:15.205179  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:34:15.205216  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:34:15.206300  108424 store.go:1342] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0919 11:34:15.206344  108424 reflector.go:153] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0919 11:34:15.206548  108424 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.206789  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:34:15.206816  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:34:15.207645  108424 store.go:1342] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0919 11:34:15.207692  108424 master.go:461] Enabling API group "apps".
I0919 11:34:15.207729  108424 reflector.go:153] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0919 11:34:15.207739  108424 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.207884  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:34:15.207914  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:34:15.208354  108424 watch_cache.go:405] Replace watchCache (rev: 48016) 
I0919 11:34:15.208974  108424 watch_cache.go:405] Replace watchCache (rev: 48016) 
I0919 11:34:15.209304  108424 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0919 11:34:15.209347  108424 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.209509  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:34:15.209518  108424 watch_cache.go:405] Replace watchCache (rev: 48016) 
I0919 11:34:15.209537  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:34:15.209658  108424 reflector.go:153] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0919 11:34:15.210614  108424 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0919 11:34:15.210654  108424 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.210690  108424 reflector.go:153] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0919 11:34:15.210792  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:34:15.210810  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:34:15.211537  108424 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0919 11:34:15.211611  108424 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.211713  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:34:15.211727  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:34:15.211785  108424 reflector.go:153] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0919 11:34:15.212207  108424 watch_cache.go:405] Replace watchCache (rev: 48017) 
I0919 11:34:15.212226  108424 watch_cache.go:405] Replace watchCache (rev: 48017) 
I0919 11:34:15.212599  108424 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0919 11:34:15.212621  108424 master.go:461] Enabling API group "admissionregistration.k8s.io".
I0919 11:34:15.212663  108424 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.212727  108424 reflector.go:153] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0919 11:34:15.212874  108424 watch_cache.go:405] Replace watchCache (rev: 48017) 
I0919 11:34:15.212942  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:34:15.212961  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:34:15.213639  108424 watch_cache.go:405] Replace watchCache (rev: 48017) 
I0919 11:34:15.214345  108424 store.go:1342] Monitoring events count at <storage-prefix>//events
I0919 11:34:15.214390  108424 master.go:461] Enabling API group "events.k8s.io".
I0919 11:34:15.214489  108424 reflector.go:153] Listing and watching *core.Event from storage/cacher.go:/events
I0919 11:34:15.214624  108424 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.214854  108424 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.215153  108424 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.215245  108424 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.215337  108424 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.215443  108424 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.215639  108424 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.215768  108424 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.215878  108424 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.215983  108424 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.216764  108424 watch_cache.go:405] Replace watchCache (rev: 48018) 
I0919 11:34:15.217431  108424 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.217736  108424 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.218990  108424 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.219612  108424 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.220727  108424 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.221251  108424 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.222094  108424 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.222791  108424 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.223536  108424 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.224000  108424 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0919 11:34:15.224199  108424 genericapiserver.go:404] Skipping API batch/v2alpha1 because it has no resources.
I0919 11:34:15.225129  108424 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.225491  108424 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.226518  108424 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.228078  108424 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.229623  108424 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.230794  108424 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.231231  108424 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.233529  108424 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.234339  108424 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.234661  108424 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.235576  108424 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0919 11:34:15.235780  108424 genericapiserver.go:404] Skipping API node.k8s.io/v1alpha1 because it has no resources.
I0919 11:34:15.238697  108424 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.239097  108424 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.239675  108424 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.240385  108424 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.242925  108424 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.243744  108424 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.245231  108424 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.245973  108424 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.246827  108424 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.247686  108424 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.248569  108424 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0919 11:34:15.248757  108424 genericapiserver.go:404] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
I0919 11:34:15.249524  108424 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.250933  108424 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0919 11:34:15.251149  108424 genericapiserver.go:404] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
I0919 11:34:15.252034  108424 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.253110  108424 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.253723  108424 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.254554  108424 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.255229  108424 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.255866  108424 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.256643  108424 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0919 11:34:15.256831  108424 genericapiserver.go:404] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
I0919 11:34:15.258297  108424 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.259173  108424 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.259623  108424 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.260454  108424 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.260977  108424 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.261321  108424 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.262071  108424 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.262312  108424 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.262742  108424 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.263647  108424 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.264032  108424 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.264513  108424 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0919 11:34:15.264702  108424 genericapiserver.go:404] Skipping API apps/v1beta2 because it has no resources.
W0919 11:34:15.264783  108424 genericapiserver.go:404] Skipping API apps/v1beta1 because it has no resources.
I0919 11:34:15.265827  108424 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.266487  108424 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.267241  108424 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.267864  108424 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.269416  108424 storage_factory.go:285] storing events.events.k8s.io in events.k8s.io/v1beta1, reading as events.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ec463d6a-1637-4f4c-b464-72c92b5e840f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:34:15.273599  108424 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 11:34:15.273635  108424 healthz.go:177] healthz check poststarthook/bootstrap-controller failed: not finished
I0919 11:34:15.273650  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:34:15.273660  108424 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 11:34:15.273668  108424 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 11:34:15.273676  108424 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[-]poststarthook/bootstrap-controller failed: reason withheld
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 11:34:15.273717  108424 httplog.go:90] GET /healthz: (398.303µs) 0 [Go-http-client/1.1 127.0.0.1:58848]
I0919 11:34:15.274773  108424 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.45099ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58850]
I0919 11:34:15.277601  108424 httplog.go:90] GET /api/v1/services: (1.04346ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58850]
I0919 11:34:15.281355  108424 httplog.go:90] GET /api/v1/services: (1.147369ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58850]
I0919 11:34:15.283666  108424 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 11:34:15.283707  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:34:15.283750  108424 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 11:34:15.283760  108424 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 11:34:15.283769  108424 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 11:34:15.283885  108424 httplog.go:90] GET /healthz: (272.13µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58848]
I0919 11:34:15.284928  108424 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.435656ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58850]
I0919 11:34:15.285814  108424 httplog.go:90] GET /api/v1/services: (1.531044ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58848]
I0919 11:34:15.287003  108424 httplog.go:90] GET /api/v1/services: (2.219304ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58852]
I0919 11:34:15.287451  108424 httplog.go:90] POST /api/v1/namespaces: (2.081023ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58850]
I0919 11:34:15.288934  108424 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.049193ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58852]
I0919 11:34:15.294826  108424 httplog.go:90] POST /api/v1/namespaces: (5.498154ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58852]
I0919 11:34:15.296868  108424 httplog.go:90] GET /api/v1/namespaces/kube-node-lease: (1.268718ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58852]
I0919 11:34:15.299057  108424 httplog.go:90] POST /api/v1/namespaces: (1.83257ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58852]
I0919 11:34:15.374767  108424 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 11:34:15.374808  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:34:15.374820  108424 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 11:34:15.374828  108424 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 11:34:15.374834  108424 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 11:34:15.374869  108424 httplog.go:90] GET /healthz: (248.463µs) 0 [Go-http-client/1.1 127.0.0.1:58852]
I0919 11:34:15.384824  108424 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 11:34:15.384860  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:34:15.384874  108424 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 11:34:15.384883  108424 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 11:34:15.384895  108424 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 11:34:15.384940  108424 httplog.go:90] GET /healthz: (281.423µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58852]
I0919 11:34:15.474545  108424 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 11:34:15.474582  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:34:15.474596  108424 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 11:34:15.474619  108424 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 11:34:15.474634  108424 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 11:34:15.474673  108424 httplog.go:90] GET /healthz: (331.637µs) 0 [Go-http-client/1.1 127.0.0.1:58852]
I0919 11:34:15.484815  108424 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 11:34:15.484850  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:34:15.484860  108424 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 11:34:15.484867  108424 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 11:34:15.484873  108424 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 11:34:15.484903  108424 httplog.go:90] GET /healthz: (321.842µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58852]
I0919 11:34:15.574589  108424 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 11:34:15.574626  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:34:15.574636  108424 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 11:34:15.574643  108424 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 11:34:15.574649  108424 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 11:34:15.574679  108424 httplog.go:90] GET /healthz: (238.07µs) 0 [Go-http-client/1.1 127.0.0.1:58852]
I0919 11:34:15.584896  108424 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 11:34:15.584959  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:34:15.584973  108424 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 11:34:15.584984  108424 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 11:34:15.584993  108424 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 11:34:15.585029  108424 httplog.go:90] GET /healthz: (303.846µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58852]
I0919 11:34:15.674718  108424 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 11:34:15.674764  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:34:15.674777  108424 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 11:34:15.674786  108424 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 11:34:15.674795  108424 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 11:34:15.674844  108424 httplog.go:90] GET /healthz: (275.995µs) 0 [Go-http-client/1.1 127.0.0.1:58852]
I0919 11:34:15.684829  108424 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 11:34:15.684869  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:34:15.684880  108424 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 11:34:15.684889  108424 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 11:34:15.684926  108424 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 11:34:15.684964  108424 httplog.go:90] GET /healthz: (377.156µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58852]
I0919 11:34:15.774667  108424 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 11:34:15.775077  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:34:15.775192  108424 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 11:34:15.775304  108424 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 11:34:15.775403  108424 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 11:34:15.776225  108424 httplog.go:90] GET /healthz: (1.722478ms) 0 [Go-http-client/1.1 127.0.0.1:58852]
I0919 11:34:15.784845  108424 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 11:34:15.784889  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:34:15.784903  108424 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 11:34:15.784913  108424 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 11:34:15.784922  108424 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 11:34:15.784956  108424 httplog.go:90] GET /healthz: (276.627µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58852]
I0919 11:34:15.874584  108424 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 11:34:15.874629  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:34:15.874654  108424 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 11:34:15.874663  108424 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 11:34:15.874669  108424 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 11:34:15.874707  108424 httplog.go:90] GET /healthz: (310.333µs) 0 [Go-http-client/1.1 127.0.0.1:58852]
I0919 11:34:15.884875  108424 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 11:34:15.884913  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:34:15.884923  108424 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 11:34:15.884929  108424 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 11:34:15.884935  108424 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 11:34:15.884959  108424 httplog.go:90] GET /healthz: (230.034µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58852]
I0919 11:34:15.974522  108424 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 11:34:15.974549  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:34:15.974567  108424 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 11:34:15.974574  108424 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 11:34:15.974579  108424 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 11:34:15.974615  108424 httplog.go:90] GET /healthz: (259.859µs) 0 [Go-http-client/1.1 127.0.0.1:58852]
I0919 11:34:15.984804  108424 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 11:34:15.984843  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:34:15.984856  108424 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 11:34:15.984866  108424 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 11:34:15.984874  108424 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 11:34:15.984905  108424 httplog.go:90] GET /healthz: (265.258µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58852]
I0919 11:34:16.057494  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:34:16.057617  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:34:16.075790  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:34:16.075830  108424 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 11:34:16.075841  108424 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 11:34:16.075850  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 11:34:16.075904  108424 httplog.go:90] GET /healthz: (1.409616ms) 0 [Go-http-client/1.1 127.0.0.1:58852]
I0919 11:34:16.086487  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:34:16.086588  108424 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 11:34:16.086600  108424 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 11:34:16.086609  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 11:34:16.086667  108424 httplog.go:90] GET /healthz: (1.905048ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58852]
I0919 11:34:16.175642  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:34:16.175682  108424 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 11:34:16.175695  108424 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 11:34:16.175705  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 11:34:16.175750  108424 httplog.go:90] GET /healthz: (1.380762ms) 0 [Go-http-client/1.1 127.0.0.1:58852]
I0919 11:34:16.186227  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:34:16.186263  108424 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 11:34:16.186286  108424 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 11:34:16.186294  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 11:34:16.186349  108424 httplog.go:90] GET /healthz: (1.610839ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58852]
I0919 11:34:16.275043  108424 httplog.go:90] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-node-critical: (1.531077ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58852]
I0919 11:34:16.275057  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.358861ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58848]
I0919 11:34:16.275247  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:34:16.275272  108424 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 11:34:16.275282  108424 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 11:34:16.275291  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 11:34:16.275321  108424 httplog.go:90] GET /healthz: (754.932µs) 0 [Go-http-client/1.1 127.0.0.1:59222]
I0919 11:34:16.275767  108424 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.95504ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:16.277442  108424 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: (1.264368ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:16.277541  108424 httplog.go:90] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (2.028121ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58852]
I0919 11:34:16.277762  108424 storage_scheduling.go:139] created PriorityClass system-node-critical with value 2000001000
I0919 11:34:16.279109  108424 httplog.go:90] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-cluster-critical: (1.14402ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59222]
I0919 11:34:16.279515  108424 httplog.go:90] POST /api/v1/namespaces/kube-system/configmaps: (1.713707ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:16.281199  108424 httplog.go:90] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (1.366582ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:16.281354  108424 storage_scheduling.go:139] created PriorityClass system-cluster-critical with value 2000000000
I0919 11:34:16.281402  108424 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
I0919 11:34:16.282224  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.7675ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58848]
I0919 11:34:16.283572  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (885.451µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:16.284515  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (701.66µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:16.285133  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:34:16.285149  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:34:16.285170  108424 httplog.go:90] GET /healthz: (702.42µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59222]
I0919 11:34:16.285558  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (768.335µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:16.286431  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (584.843µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:16.287428  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (712.327µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:16.288418  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (723.857µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:16.289470  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (735.565µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:16.291209  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-admin: (1.292959ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:16.293480  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.840748ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:16.293693  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/cluster-admin
I0919 11:34:16.296135  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:discovery: (1.695781ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:16.300491  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.94121ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:16.300881  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:discovery
I0919 11:34:16.304196  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:basic-user: (1.377511ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:16.306426  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.725823ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:16.306701  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:basic-user
I0919 11:34:16.308052  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:public-info-viewer: (923.179µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:16.310215  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.353455ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:16.310552  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:public-info-viewer
I0919 11:34:16.311744  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (933.485µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:16.314269  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.979586ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:16.314649  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/admin
I0919 11:34:16.315747  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (792.049µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:16.318054  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.425557ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:16.318301  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/edit
I0919 11:34:16.319233  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (748.564µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:16.321143  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.552097ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:16.321616  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/view
I0919 11:34:16.322635  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (837.449µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:16.324797  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.815367ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:16.325256  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin
I0919 11:34:16.326240  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (733.922µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:16.328279  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.513833ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:16.328743  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit
I0919 11:34:16.329718  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (744.16µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:16.331969  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.669741ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:16.332270  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view
I0919 11:34:16.333117  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:heapster: (649.175µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:16.334845  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.198668ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:16.335050  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:heapster
I0919 11:34:16.336526  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node: (1.263035ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:16.338789  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.866328ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:16.339178  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node
I0919 11:34:16.340235  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-problem-detector: (802.691µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:16.342179  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.532657ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:16.342479  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector
I0919 11:34:16.343828  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kubelet-api-admin: (1.046902ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:16.345942  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.452395ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:16.346152  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kubelet-api-admin
I0919 11:34:16.347331  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-bootstrapper: (929.577µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:16.349354  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.556892ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:16.349634  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-bootstrapper
I0919 11:34:16.350758  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:auth-delegator: (904.646µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:16.353174  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.863169ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:16.353455  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:auth-delegator
I0919 11:34:16.354589  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-aggregator: (819.222µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:16.356245  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.308702ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:16.356468  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator
I0919 11:34:16.357536  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-controller-manager: (800.471µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:16.359671  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.712883ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:16.359940  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager
I0919 11:34:16.360994  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-dns: (838.39µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:16.363244  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.80415ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:16.363506  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-dns
I0919 11:34:16.365167  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:persistent-volume-provisioner: (1.426833ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:16.367561  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.874495ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:16.367810  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner
I0919 11:34:16.369204  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-attacher: (1.147754ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:16.371185  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.522007ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:16.371432  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:csi-external-attacher
I0919 11:34:16.372420  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:nodeclient: (749.644µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:16.374520  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.532301ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:16.374716  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:nodeclient
I0919 11:34:16.375352  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:34:16.375881  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:34:16.376249  108424 httplog.go:90] GET /healthz: (1.785277ms) 0 [Go-http-client/1.1 127.0.0.1:59222]
I0919 11:34:16.375630  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient: (718.001µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:16.379063  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.084088ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:16.379492  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
I0919 11:34:16.380661  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:volume-scheduler: (963.712µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:16.383117  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.81493ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:16.383533  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:volume-scheduler
I0919 11:34:16.385120  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-proxier: (1.190583ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:16.385619  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:34:16.385649  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:34:16.385688  108424 httplog.go:90] GET /healthz: (1.177181ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59222]
I0919 11:34:16.387531  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.549227ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59222]
I0919 11:34:16.387884  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-proxier
I0919 11:34:16.389181  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-scheduler: (1.088913ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59222]
I0919 11:34:16.392257  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.3834ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59222]
I0919 11:34:16.392706  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler
I0919 11:34:16.394860  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-provisioner: (1.846313ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59222]
I0919 11:34:16.397712  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.418514ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59222]
I0919 11:34:16.398981  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:csi-external-provisioner
I0919 11:34:16.401030  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:attachdetach-controller: (1.752087ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59222]
I0919 11:34:16.404120  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.534069ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59222]
I0919 11:34:16.404496  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0919 11:34:16.406608  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:clusterrole-aggregation-controller: (1.71612ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59222]
I0919 11:34:16.408997  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.852492ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59222]
I0919 11:34:16.409412  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0919 11:34:16.410913  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:cronjob-controller: (1.110604ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59222]
I0919 11:34:16.413496  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.985158ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59222]
I0919 11:34:16.413827  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0919 11:34:16.415141  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:daemon-set-controller: (1.065322ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59222]
I0919 11:34:16.417643  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.803681ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59222]
I0919 11:34:16.417881  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0919 11:34:16.419184  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:deployment-controller: (993.751µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59222]
I0919 11:34:16.421616  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.866377ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59222]
I0919 11:34:16.421856  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller
I0919 11:34:16.422933  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:disruption-controller: (900.484µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59222]
I0919 11:34:16.425060  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.75096ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59222]
I0919 11:34:16.425258  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller
I0919 11:34:16.426353  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:endpoint-controller: (833.983µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59222]
I0919 11:34:16.428309  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.584114ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59222]
I0919 11:34:16.428663  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0919 11:34:16.429784  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:expand-controller: (893.763µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59222]
I0919 11:34:16.431598  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.420032ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59222]
I0919 11:34:16.431798  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:expand-controller
I0919 11:34:16.433180  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:generic-garbage-collector: (1.183264ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59222]
I0919 11:34:16.436408  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.709477ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59222]
I0919 11:34:16.436657  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0919 11:34:16.438064  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:horizontal-pod-autoscaler: (1.165143ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59222]
I0919 11:34:16.440345  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.721186ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59222]
I0919 11:34:16.440631  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0919 11:34:16.441998  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:job-controller: (1.032023ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59222]
I0919 11:34:16.444631  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.140209ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59222]
I0919 11:34:16.444906  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller
I0919 11:34:16.446064  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:namespace-controller: (936.24µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59222]
I0919 11:34:16.449749  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.976513ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59222]
I0919 11:34:16.450064  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:namespace-controller
I0919 11:34:16.451910  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:node-controller: (1.213818ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59222]
I0919 11:34:16.454907  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.365246ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59222]
I0919 11:34:16.455213  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:node-controller
I0919 11:34:16.457118  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:persistent-volume-binder: (1.622251ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59222]
I0919 11:34:16.460118  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.296148ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59222]
I0919 11:34:16.460382  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0919 11:34:16.461612  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pod-garbage-collector: (1.004379ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59222]
I0919 11:34:16.463643  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.612384ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59222]
I0919 11:34:16.463873  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0919 11:34:16.465292  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replicaset-controller: (1.143088ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59222]
I0919 11:34:16.467850  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.967304ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59222]
I0919 11:34:16.468110  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0919 11:34:16.469995  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replication-controller: (1.742861ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59222]
I0919 11:34:16.474120  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.137194ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59222]
I0919 11:34:16.474441  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller
I0919 11:34:16.475580  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:34:16.475606  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:resourcequota-controller: (940.352µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59222]
I0919 11:34:16.475632  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:34:16.475662  108424 httplog.go:90] GET /healthz: (1.327532ms) 0 [Go-http-client/1.1 127.0.0.1:59220]
I0919 11:34:16.478427  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.375892ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59222]
I0919 11:34:16.478925  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0919 11:34:16.480089  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:route-controller: (914.502µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59222]
I0919 11:34:16.482726  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.14889ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59222]
I0919 11:34:16.482940  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:route-controller
I0919 11:34:16.485641  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-account-controller: (2.351568ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59222]
I0919 11:34:16.485642  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:34:16.486043  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:34:16.486501  108424 httplog.go:90] GET /healthz: (1.775118ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:16.488631  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.983482ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59222]
I0919 11:34:16.489062  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-account-controller
I0919 11:34:16.494492  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-controller: (5.084464ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59222]
I0919 11:34:16.497522  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.373766ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59222]
I0919 11:34:16.497874  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-controller
I0919 11:34:16.499165  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:statefulset-controller: (987.582µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59222]
I0919 11:34:16.501825  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.997934ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59222]
I0919 11:34:16.502094  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0919 11:34:16.503320  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:ttl-controller: (1.051432ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59222]
I0919 11:34:16.505764  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.961757ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59222]
I0919 11:34:16.506035  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-controller
I0919 11:34:16.507716  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:certificate-controller: (1.362061ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59222]
I0919 11:34:16.510189  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.008111ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59222]
I0919 11:34:16.510613  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller
I0919 11:34:16.514595  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pvc-protection-controller: (1.034804ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59222]
I0919 11:34:16.537141  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.366343ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59222]
I0919 11:34:16.537575  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0919 11:34:16.555923  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pv-protection-controller: (2.138698ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59222]
I0919 11:34:16.575709  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:34:16.575756  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:34:16.575800  108424 httplog.go:90] GET /healthz: (1.539716ms) 0 [Go-http-client/1.1 127.0.0.1:59220]
I0919 11:34:16.576328  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.561435ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59222]
I0919 11:34:16.576611  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0919 11:34:16.586086  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:34:16.586131  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:34:16.586179  108424 httplog.go:90] GET /healthz: (1.589131ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59222]
I0919 11:34:16.595341  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin: (1.459276ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59222]
I0919 11:34:16.616760  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.966363ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59222]
I0919 11:34:16.617051  108424 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin
I0919 11:34:16.635661  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (1.877873ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59222]
I0919 11:34:16.656527  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.754408ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59222]
I0919 11:34:16.656844  108424 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery
I0919 11:34:16.675910  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:basic-user: (2.105636ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59222]
I0919 11:34:16.675947  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:34:16.675972  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:34:16.676050  108424 httplog.go:90] GET /healthz: (1.753449ms) 0 [Go-http-client/1.1 127.0.0.1:59220]
I0919 11:34:16.686050  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:34:16.686089  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:34:16.686152  108424 httplog.go:90] GET /healthz: (1.445055ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:16.700840  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.06153ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:16.701124  108424 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user
I0919 11:34:16.715188  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:public-info-viewer: (1.43743ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:16.736505  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.736577ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:16.736975  108424 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:public-info-viewer
I0919 11:34:16.755789  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node-proxier: (2.018649ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:16.775301  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:34:16.775349  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:34:16.775420  108424 httplog.go:90] GET /healthz: (1.1019ms) 0 [Go-http-client/1.1 127.0.0.1:59222]
I0919 11:34:16.776295  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.5818ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:16.776555  108424 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier
I0919 11:34:16.786274  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:34:16.786313  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:34:16.786406  108424 httplog.go:90] GET /healthz: (1.615784ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:16.798269  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-controller-manager: (4.465803ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:16.816004  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.197907ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:16.816761  108424 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager
I0919 11:34:16.835073  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-dns: (1.240694ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:16.856253  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.337903ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:16.856877  108424 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns
I0919 11:34:16.875976  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:34:16.876196  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:34:16.876127  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-scheduler: (2.349123ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:16.876425  108424 httplog.go:90] GET /healthz: (1.30324ms) 0 [Go-http-client/1.1 127.0.0.1:59222]
I0919 11:34:16.886204  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:34:16.886504  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:34:16.886718  108424 httplog.go:90] GET /healthz: (1.960373ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59222]
I0919 11:34:16.896042  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.184429ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59222]
I0919 11:34:16.896312  108424 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler
I0919 11:34:16.915581  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:volume-scheduler: (1.651079ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59222]
I0919 11:34:16.937919  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (4.015548ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59222]
I0919 11:34:16.938204  108424 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:volume-scheduler
I0919 11:34:16.956807  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node: (2.918922ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59222]
I0919 11:34:16.979893  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:34:16.979936  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:34:16.979982  108424 httplog.go:90] GET /healthz: (2.667523ms) 0 [Go-http-client/1.1 127.0.0.1:59222]
I0919 11:34:16.980519  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.16635ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:16.980973  108424 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node
I0919 11:34:16.985560  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:34:16.985596  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:34:16.985636  108424 httplog.go:90] GET /healthz: (1.125564ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:16.998073  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:attachdetach-controller: (4.281415ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:17.016100  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.334644ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:17.016621  108424 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0919 11:34:17.035110  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:clusterrole-aggregation-controller: (1.346945ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:17.056245  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.444052ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:17.056606  108424 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0919 11:34:17.075310  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:cronjob-controller: (1.554307ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:17.075318  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:34:17.075350  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:34:17.075399  108424 httplog.go:90] GET /healthz: (1.087753ms) 0 [Go-http-client/1.1 127.0.0.1:59222]
I0919 11:34:17.085552  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:34:17.085601  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:34:17.085657  108424 httplog.go:90] GET /healthz: (1.042157ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:17.096035  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.299701ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:17.096497  108424 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0919 11:34:17.115490  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:daemon-set-controller: (1.761544ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:17.136008  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.191946ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:17.136248  108424 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0919 11:34:17.155283  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:deployment-controller: (1.493254ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:17.175472  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:34:17.175512  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:34:17.175551  108424 httplog.go:90] GET /healthz: (1.298878ms) 0 [Go-http-client/1.1 127.0.0.1:59222]
I0919 11:34:17.176176  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.379664ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:17.176463  108424 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I0919 11:34:17.185525  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:34:17.185556  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:34:17.185590  108424 httplog.go:90] GET /healthz: (1.073196ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:17.194812  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:disruption-controller: (1.129345ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:17.216182  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.277485ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:17.216470  108424 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I0919 11:34:17.235803  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:endpoint-controller: (2.022979ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:17.255957  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.175226ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:17.256536  108424 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0919 11:34:17.275263  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:expand-controller: (1.405097ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:17.275352  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:34:17.275420  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:34:17.275455  108424 httplog.go:90] GET /healthz: (1.02841ms) 0 [Go-http-client/1.1 127.0.0.1:59222]
I0919 11:34:17.285902  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:34:17.285944  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:34:17.285999  108424 httplog.go:90] GET /healthz: (1.278947ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59222]
I0919 11:34:17.296307  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.395575ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59222]
I0919 11:34:17.296667  108424 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
I0919 11:34:17.315488  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:generic-garbage-collector: (1.70174ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59222]
I0919 11:34:17.335688  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.937778ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59222]
I0919 11:34:17.336147  108424 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0919 11:34:17.355902  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:horizontal-pod-autoscaler: (1.360829ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59222]
I0919 11:34:17.375547  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:34:17.375586  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:34:17.375628  108424 httplog.go:90] GET /healthz: (1.307147ms) 0 [Go-http-client/1.1 127.0.0.1:59220]
I0919 11:34:17.376598  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.734866ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59222]
I0919 11:34:17.376965  108424 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0919 11:34:17.385722  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:34:17.385763  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:34:17.385806  108424 httplog.go:90] GET /healthz: (1.1479ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59222]
I0919 11:34:17.399630  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:job-controller: (1.431697ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59222]
I0919 11:34:17.416323  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.448595ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59222]
I0919 11:34:17.416594  108424 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I0919 11:34:17.435330  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:namespace-controller: (1.505613ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59222]
I0919 11:34:17.455920  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.180867ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59222]
I0919 11:34:17.456166  108424 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I0919 11:34:17.475897  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:node-controller: (2.046382ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59222]
I0919 11:34:17.475929  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:34:17.475951  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:34:17.475985  108424 httplog.go:90] GET /healthz: (1.731333ms) 0 [Go-http-client/1.1 127.0.0.1:59220]
I0919 11:34:17.485683  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:34:17.485715  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:34:17.485760  108424 httplog.go:90] GET /healthz: (1.112866ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:17.495936  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.102656ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:17.496235  108424 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I0919 11:34:17.515428  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:persistent-volume-binder: (1.652518ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:17.536183  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.415097ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:17.536465  108424 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0919 11:34:17.555447  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pod-garbage-collector: (1.631552ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:17.576008  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:34:17.576043  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:34:17.576093  108424 httplog.go:90] GET /healthz: (1.786493ms) 0 [Go-http-client/1.1 127.0.0.1:59222]
I0919 11:34:17.576504  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.799246ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:17.576765  108424 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0919 11:34:17.585513  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:34:17.585746  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:34:17.585984  108424 httplog.go:90] GET /healthz: (1.356112ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:17.596532  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replicaset-controller: (1.324592ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:17.616822  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.237764ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:17.617159  108424 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0919 11:34:17.635289  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replication-controller: (1.556745ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:17.656313  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.513507ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:17.656634  108424 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I0919 11:34:17.675181  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:resourcequota-controller: (1.398554ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:17.675689  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:34:17.675850  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:34:17.676100  108424 httplog.go:90] GET /healthz: (1.749506ms) 0 [Go-http-client/1.1 127.0.0.1:59222]
I0919 11:34:17.685729  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:34:17.685908  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:34:17.686125  108424 httplog.go:90] GET /healthz: (1.509324ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59222]
I0919 11:34:17.698980  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (5.265282ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59222]
I0919 11:34:17.699219  108424 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0919 11:34:17.715194  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:route-controller: (1.386288ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59222]
I0919 11:34:17.736071  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.285323ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59222]
I0919 11:34:17.736446  108424 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I0919 11:34:17.755510  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-account-controller: (1.73887ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59222]
I0919 11:34:17.775629  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:34:17.775826  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:34:17.776017  108424 httplog.go:90] GET /healthz: (1.730886ms) 0 [Go-http-client/1.1 127.0.0.1:59220]
I0919 11:34:17.776081  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.314302ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59222]
I0919 11:34:17.776399  108424 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I0919 11:34:17.785924  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:34:17.785965  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:34:17.786004  108424 httplog.go:90] GET /healthz: (1.296007ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59222]
I0919 11:34:17.794977  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-controller: (1.253308ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59222]
I0919 11:34:17.816471  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.559396ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59222]
I0919 11:34:17.816883  108424 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I0919 11:34:17.834864  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:statefulset-controller: (1.083139ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59222]
I0919 11:34:17.864040  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (10.175661ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59222]
I0919 11:34:17.864315  108424 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0919 11:34:17.875338  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:34:17.875490  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:34:17.875530  108424 httplog.go:90] GET /healthz: (1.305394ms) 0 [Go-http-client/1.1 127.0.0.1:59220]
I0919 11:34:17.875541  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:ttl-controller: (1.700509ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59222]
I0919 11:34:17.885578  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:34:17.885608  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:34:17.885652  108424 httplog.go:90] GET /healthz: (1.024133ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59222]
I0919 11:34:17.900495  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (6.253609ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59222]
I0919 11:34:17.900757  108424 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I0919 11:34:17.915644  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:certificate-controller: (1.726527ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59222]
I0919 11:34:17.939007  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (5.257636ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59222]
I0919 11:34:17.939451  108424 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I0919 11:34:17.960028  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pvc-protection-controller: (6.157315ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59222]
I0919 11:34:17.975875  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:34:17.976565  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:34:17.976774  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.006793ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59222]
I0919 11:34:17.976930  108424 httplog.go:90] GET /healthz: (2.600904ms) 0 [Go-http-client/1.1 127.0.0.1:59220]
I0919 11:34:17.977133  108424 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0919 11:34:17.988175  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:34:17.988225  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:34:17.988282  108424 httplog.go:90] GET /healthz: (3.345548ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:17.996529  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pv-protection-controller: (1.453453ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:18.016182  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.377963ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:18.016499  108424 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0919 11:34:18.035226  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/extension-apiserver-authentication-reader: (1.542613ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:18.039753  108424 httplog.go:90] GET /api/v1/namespaces/kube-system: (3.834694ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:18.055939  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.193634ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:18.056195  108424 storage_rbac.go:278] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
I0919 11:34:18.075581  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:34:18.075617  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:34:18.075633  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:bootstrap-signer: (1.845062ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:18.075673  108424 httplog.go:90] GET /healthz: (1.03346ms) 0 [Go-http-client/1.1 127.0.0.1:59222]
I0919 11:34:18.077714  108424 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.427306ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:18.086062  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:34:18.086303  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:34:18.086648  108424 httplog.go:90] GET /healthz: (1.909453ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:18.097480  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.129433ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:18.097762  108424 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0919 11:34:18.115012  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:cloud-provider: (1.30563ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:18.116983  108424 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.265305ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:18.137907  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (3.429745ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:18.138257  108424 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0919 11:34:18.155158  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:token-cleaner: (1.42441ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:18.157063  108424 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.271445ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:18.175315  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:34:18.176480  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:34:18.176773  108424 httplog.go:90] GET /healthz: (2.498598ms) 0 [Go-http-client/1.1 127.0.0.1:59222]
I0919 11:34:18.176377  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.517089ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:18.177266  108424 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0919 11:34:18.185812  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:34:18.186167  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:34:18.186476  108424 httplog.go:90] GET /healthz: (1.741118ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:18.195510  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-controller-manager: (1.698906ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:18.198011  108424 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.423595ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:18.216221  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.444214ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:18.216589  108424 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0919 11:34:18.235195  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-scheduler: (1.448103ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:18.237162  108424 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.409729ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:18.256084  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.149025ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:18.256325  108424 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0919 11:34:18.275060  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles/system:controller:bootstrap-signer: (1.260475ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:18.275304  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:34:18.275332  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:34:18.275417  108424 httplog.go:90] GET /healthz: (1.06254ms) 0 [Go-http-client/1.1 127.0.0.1:59222]
I0919 11:34:18.276995  108424 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.092502ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:18.285688  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:34:18.285724  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:34:18.285778  108424 httplog.go:90] GET /healthz: (1.206936ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:18.297204  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles: (2.696992ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:18.297488  108424 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0919 11:34:18.315670  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::extension-apiserver-authentication-reader: (1.885531ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:18.317392  108424 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.120496ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:18.336047  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.209831ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:18.336313  108424 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::extension-apiserver-authentication-reader in kube-system
I0919 11:34:18.354960  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-controller-manager: (1.142898ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:18.356684  108424 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.256303ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:18.375684  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:34:18.375740  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:34:18.375806  108424 httplog.go:90] GET /healthz: (1.53474ms) 0 [Go-http-client/1.1 127.0.0.1:59222]
I0919 11:34:18.376047  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.169707ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:18.376261  108424 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0919 11:34:18.385453  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:34:18.385490  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:34:18.385555  108424 httplog.go:90] GET /healthz: (949.315µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:18.395195  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-scheduler: (1.361596ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:18.396977  108424 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.256949ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:18.416534  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.735853ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:18.416860  108424 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0919 11:34:18.435248  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:bootstrap-signer: (1.257105ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:18.437147  108424 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.237907ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:18.455881  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.206795ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:18.456138  108424 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0919 11:34:18.475181  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:cloud-provider: (1.453298ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:18.475647  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:34:18.475775  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:34:18.476284  108424 httplog.go:90] GET /healthz: (1.816032ms) 0 [Go-http-client/1.1 127.0.0.1:59222]
I0919 11:34:18.477638  108424 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.812201ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:18.485788  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:34:18.485829  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:34:18.485908  108424 httplog.go:90] GET /healthz: (1.33953ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:18.495966  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.206637ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:18.496211  108424 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0919 11:34:18.515153  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:token-cleaner: (1.445983ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:18.517550  108424 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.58168ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:18.539352  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (5.659301ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:18.539730  108424 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0919 11:34:18.555132  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings/system:controller:bootstrap-signer: (1.449756ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:18.556959  108424 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.356155ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:18.576107  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings: (1.783498ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59220]
I0919 11:34:18.577849  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:34:18.577889  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:34:18.577934  108424 httplog.go:90] GET /healthz: (3.625881ms) 0 [Go-http-client/1.1 127.0.0.1:59222]
I0919 11:34:18.578010  108424 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0919 11:34:18.585319  108424 httplog.go:90] GET /healthz: (731.775µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59222]
I0919 11:34:18.586923  108424 httplog.go:90] GET /api/v1/namespaces/default: (1.063169ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59222]
I0919 11:34:18.596097  108424 httplog.go:90] POST /api/v1/namespaces: (8.447098ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59222]
I0919 11:34:18.598437  108424 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.762599ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59222]
I0919 11:34:18.606719  108424 httplog.go:90] POST /api/v1/namespaces/default/services: (5.031988ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59222]
I0919 11:34:18.610780  108424 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.318625ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59222]
I0919 11:34:18.611871  108424 httplog.go:90] POST /api/v1/namespaces/default/endpoints: (676.311µs) 422 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59222]
E0919 11:34:18.612865  108424 controller.go:224] unable to sync kubernetes service: Endpoints "kubernetes" is invalid: [subsets[0].addresses[0].ip: Invalid value: "<nil>": must be a valid IP address, (e.g. 10.9.8.7), subsets[0].addresses[0].ip: Invalid value: "<nil>": must be a valid IP address]
I0919 11:34:18.675686  108424 httplog.go:90] GET /healthz: (1.314369ms) 200 [Go-http-client/1.1 127.0.0.1:59222]
I0919 11:34:18.679701  108424 httplog.go:90] POST /api/v1/namespaces/kube-system/configmaps: (2.339186ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59222]
W0919 11:34:18.680192  108424 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 11:34:18.680250  108424 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 11:34:18.680263  108424 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 11:34:18.680296  108424 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 11:34:18.680307  108424 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 11:34:18.680323  108424 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 11:34:18.680333  108424 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 11:34:18.680345  108424 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 11:34:18.680356  108424 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 11:34:18.680383  108424 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 11:34:18.680433  108424 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0919 11:34:18.681952  108424 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps/scheduler-custom-policy-config-0: (1.31101ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59222]
I0919 11:34:18.683427  108424 factory.go:304] Creating scheduler from configuration: {{ } [{PredicateOne <nil>} {PredicateTwo <nil>}] [{PriorityOne 1 <nil>} {PriorityTwo 5 <nil>}] [] 0 false}
I0919 11:34:18.683506  108424 factory.go:321] Registering predicate: PredicateOne
I0919 11:34:18.683517  108424 plugins.go:288] Predicate type PredicateOne already registered, reusing.
I0919 11:34:18.683524  108424 factory.go:321] Registering predicate: PredicateTwo
I0919 11:34:18.683530  108424 plugins.go:288] Predicate type PredicateTwo already registered, reusing.
I0919 11:34:18.683537  108424 factory.go:336] Registering priority: PriorityOne
I0919 11:34:18.683546  108424 plugins.go:399] Priority type PriorityOne already registered, reusing.
I0919 11:34:18.683558  108424 factory.go:336] Registering priority: PriorityTwo
I0919 11:34:18.683564  108424 plugins.go:399] Priority type PriorityTwo already registered, reusing.
I0919 11:34:18.683571  108424 factory.go:382] Creating scheduler with fit predicates 'map[PredicateOne:{} PredicateTwo:{}]' and priority functions 'map[PriorityOne:{} PriorityTwo:{}]'
I0919 11:34:18.686298  108424 httplog.go:90] POST /api/v1/namespaces/kube-system/configmaps: (2.305634ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59222]
W0919 11:34:18.686768  108424 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0919 11:34:18.688636  108424 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps/scheduler-custom-policy-config-1: (1.307003ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59222]
I0919 11:34:18.688879  108424 factory.go:304] Creating scheduler from configuration: {{ } [] [] [] 0 false}
I0919 11:34:18.688900  108424 factory.go:313] Using predicates from algorithm provider 'DefaultProvider'
I0919 11:34:18.688908  108424 factory.go:328] Using priorities from algorithm provider 'DefaultProvider'
I0919 11:34:18.688913  108424 factory.go:382] Creating scheduler with fit predicates 'map[CheckNodeUnschedulable:{} CheckVolumeBinding:{} GeneralPredicates:{} MatchInterPodAffinity:{} MaxAzureDiskVolumeCount:{} MaxCSIVolumeCountPred:{} MaxEBSVolumeCount:{} MaxGCEPDVolumeCount:{} NoDiskConflict:{} NoVolumeZoneConflict:{} PodToleratesNodeTaints:{}]' and priority functions 'map[BalancedResourceAllocation:{} ImageLocalityPriority:{} InterPodAffinityPriority:{} LeastRequestedPriority:{} NodeAffinityPriority:{} NodePreferAvoidPodsPriority:{} SelectorSpreadPriority:{} TaintTolerationPriority:{}]'
I0919 11:34:18.694395  108424 httplog.go:90] POST /api/v1/namespaces/kube-system/configmaps: (5.037684ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59222]
W0919 11:34:18.694872  108424 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0919 11:34:18.699203  108424 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps/scheduler-custom-policy-config-2: (3.961502ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59222]
I0919 11:34:18.699670  108424 factory.go:304] Creating scheduler from configuration: {{ } [] [] [] 0 false}
I0919 11:34:18.699916  108424 factory.go:382] Creating scheduler with fit predicates 'map[]' and priority functions 'map[]'
I0919 11:34:18.701968  108424 httplog.go:90] POST /api/v1/namespaces/kube-system/configmaps: (1.467992ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59222]
W0919 11:34:18.702514  108424 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0919 11:34:18.703740  108424 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps/scheduler-custom-policy-config-3: (970.84µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59222]
I0919 11:34:18.704212  108424 factory.go:304] Creating scheduler from configuration: {{ } [{PredicateOne <nil>} {PredicateTwo <nil>}] [{PriorityOne 1 <nil>} {PriorityTwo 5 <nil>}] [] 0 false}
I0919 11:34:18.704353  108424 factory.go:321] Registering predicate: PredicateOne
I0919 11:34:18.704414  108424 plugins.go:288] Predicate type PredicateOne already registered, reusing.
I0919 11:34:18.704449  108424 factory.go:321] Registering predicate: PredicateTwo
I0919 11:34:18.704473  108424 plugins.go:288] Predicate type PredicateTwo already registered, reusing.
I0919 11:34:18.704496  108424 factory.go:336] Registering priority: PriorityOne
I0919 11:34:18.704599  108424 plugins.go:399] Priority type PriorityOne already registered, reusing.
I0919 11:34:18.704645  108424 factory.go:336] Registering priority: PriorityTwo
I0919 11:34:18.704674  108424 plugins.go:399] Priority type PriorityTwo already registered, reusing.
I0919 11:34:18.704703  108424 factory.go:382] Creating scheduler with fit predicates 'map[PredicateOne:{} PredicateTwo:{}]' and priority functions 'map[PriorityOne:{} PriorityTwo:{}]'
I0919 11:34:18.706584  108424 httplog.go:90] POST /api/v1/namespaces/kube-system/configmaps: (1.526393ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59222]
W0919 11:34:18.706969  108424 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0919 11:34:18.708527  108424 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps/scheduler-custom-policy-config-4: (1.093376ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59222]
I0919 11:34:18.708945  108424 factory.go:304] Creating scheduler from configuration: {{ } [] [] [] 0 false}
I0919 11:34:18.708972  108424 factory.go:313] Using predicates from algorithm provider 'DefaultProvider'
I0919 11:34:18.708983  108424 factory.go:328] Using priorities from algorithm provider 'DefaultProvider'
I0919 11:34:18.708989  108424 factory.go:382] Creating scheduler with fit predicates 'map[CheckNodeUnschedulable:{} CheckVolumeBinding:{} GeneralPredicates:{} MatchInterPodAffinity:{} MaxAzureDiskVolumeCount:{} MaxCSIVolumeCountPred:{} MaxEBSVolumeCount:{} MaxGCEPDVolumeCount:{} NoDiskConflict:{} NoVolumeZoneConflict:{} PodToleratesNodeTaints:{}]' and priority functions 'map[BalancedResourceAllocation:{} ImageLocalityPriority:{} InterPodAffinityPriority:{} LeastRequestedPriority:{} NodeAffinityPriority:{} NodePreferAvoidPodsPriority:{} SelectorSpreadPriority:{} TaintTolerationPriority:{}]'
I0919 11:34:18.877170  108424 request.go:538] Throttling request took 167.928928ms, request: POST:http://127.0.0.1:44563/api/v1/namespaces/kube-system/configmaps
I0919 11:34:18.879858  108424 httplog.go:90] POST /api/v1/namespaces/kube-system/configmaps: (2.326605ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59222]
W0919 11:34:18.880251  108424 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0919 11:34:19.077161  108424 request.go:538] Throttling request took 196.494814ms, request: GET:http://127.0.0.1:44563/api/v1/namespaces/kube-system/configmaps/scheduler-custom-policy-config-5
I0919 11:34:19.079247  108424 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps/scheduler-custom-policy-config-5: (1.75179ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59222]
I0919 11:34:19.079802  108424 factory.go:304] Creating scheduler from configuration: {{ } [] [] [] 0 false}
I0919 11:34:19.080762  108424 factory.go:382] Creating scheduler with fit predicates 'map[]' and priority functions 'map[]'
I0919 11:34:19.277157  108424 request.go:538] Throttling request took 196.099384ms, request: DELETE:http://127.0.0.1:44563/api/v1/nodes
I0919 11:34:19.279273  108424 httplog.go:90] DELETE /api/v1/nodes: (1.767839ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59222]
I0919 11:34:19.279556  108424 controller.go:182] Shutting down kubernetes service endpoint reconciler
I0919 11:34:19.281263  108424 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.397674ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59222]
--- FAIL: TestSchedulerCreationFromConfigMap (4.23s)
    scheduler_test.go:283: Expected predicates map[PredicateOne:{} PredicateTwo:{}], got map[CheckNodeUnschedulable:{} PodToleratesNodeTaints:{} PredicateOne:{} PredicateTwo:{}]
    scheduler_test.go:283: Expected predicates map[CheckNodeCondition:{}], got map[CheckNodeUnschedulable:{} PodToleratesNodeTaints:{}]
    scheduler_test.go:283: Expected predicates map[PredicateOne:{} PredicateTwo:{}], got map[CheckNodeUnschedulable:{} PodToleratesNodeTaints:{} PredicateOne:{} PredicateTwo:{}]
    scheduler_test.go:283: Expected predicates map[CheckNodeCondition:{}], got map[CheckNodeUnschedulable:{} PodToleratesNodeTaints:{}]

				from junit_d965d8661547eb73cabe6d94d5550ec333e4c0fa_20190919-112414.xml



k8s.io/kubernetes/test/integration/scheduler TestTaintBasedEvictions 2m20s

go test -v k8s.io/kubernetes/test/integration/scheduler -run TestTaintBasedEvictions$
=== RUN   TestTaintBasedEvictions
I0919 11:35:10.760483  108424 feature_gate.go:216] feature gates: &{map[EvenPodsSpread:false TaintBasedEvictions:true]}
--- FAIL: TestTaintBasedEvictions (140.43s)

				from junit_d965d8661547eb73cabe6d94d5550ec333e4c0fa_20190919-112414.xml



k8s.io/kubernetes/test/integration/scheduler TestTaintBasedEvictions/Taint_based_evictions_for_NodeNotReady_and_0_tolerationseconds 35s

go test -v k8s.io/kubernetes/test/integration/scheduler -run TestTaintBasedEvictions/Taint_based_evictions_for_NodeNotReady_and_0_tolerationseconds$
=== RUN   TestTaintBasedEvictions/Taint_based_evictions_for_NodeNotReady_and_0_tolerationseconds
W0919 11:36:21.113333  108424 services.go:35] No CIDR for service cluster IPs specified. Default value which was 10.0.0.0/24 is deprecated and will be removed in future releases. Please specify it using --service-cluster-ip-range on kube-apiserver.
I0919 11:36:21.113388  108424 services.go:47] Setting service IP to "10.0.0.1" (read-write).
I0919 11:36:21.113408  108424 master.go:303] Node port range unspecified. Defaulting to 30000-32767.
I0919 11:36:21.113421  108424 master.go:259] Using reconciler: 
I0919 11:36:21.115671  108424 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.116275  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:36:21.116428  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:36:21.117516  108424 store.go:1342] Monitoring podtemplates count at <storage-prefix>//podtemplates
I0919 11:36:21.117575  108424 reflector.go:153] Listing and watching *core.PodTemplate from storage/cacher.go:/podtemplates
I0919 11:36:21.117560  108424 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.117977  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:36:21.118083  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:36:21.118833  108424 watch_cache.go:405] Replace watchCache (rev: 59826) 
I0919 11:36:21.118882  108424 store.go:1342] Monitoring events count at <storage-prefix>//events
I0919 11:36:21.118901  108424 reflector.go:153] Listing and watching *core.Event from storage/cacher.go:/events
I0919 11:36:21.119027  108424 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.119213  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:36:21.119239  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:36:21.120108  108424 watch_cache.go:405] Replace watchCache (rev: 59826) 
I0919 11:36:21.120282  108424 store.go:1342] Monitoring limitranges count at <storage-prefix>//limitranges
I0919 11:36:21.120325  108424 reflector.go:153] Listing and watching *core.LimitRange from storage/cacher.go:/limitranges
I0919 11:36:21.120543  108424 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.121125  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:36:21.121236  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:36:21.121289  108424 watch_cache.go:405] Replace watchCache (rev: 59826) 
I0919 11:36:21.122196  108424 store.go:1342] Monitoring resourcequotas count at <storage-prefix>//resourcequotas
I0919 11:36:21.122265  108424 reflector.go:153] Listing and watching *core.ResourceQuota from storage/cacher.go:/resourcequotas
I0919 11:36:21.122441  108424 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.122608  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:36:21.122632  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:36:21.123071  108424 watch_cache.go:405] Replace watchCache (rev: 59826) 
I0919 11:36:21.123355  108424 store.go:1342] Monitoring secrets count at <storage-prefix>//secrets
I0919 11:36:21.123418  108424 reflector.go:153] Listing and watching *core.Secret from storage/cacher.go:/secrets
I0919 11:36:21.123583  108424 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.123702  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:36:21.123731  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:36:21.124277  108424 watch_cache.go:405] Replace watchCache (rev: 59826) 
I0919 11:36:21.124311  108424 store.go:1342] Monitoring persistentvolumes count at <storage-prefix>//persistentvolumes
I0919 11:36:21.124374  108424 reflector.go:153] Listing and watching *core.PersistentVolume from storage/cacher.go:/persistentvolumes
I0919 11:36:21.124814  108424 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.124963  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:36:21.124989  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:36:21.125730  108424 store.go:1342] Monitoring persistentvolumeclaims count at <storage-prefix>//persistentvolumeclaims
I0919 11:36:21.125746  108424 watch_cache.go:405] Replace watchCache (rev: 59826) 
I0919 11:36:21.125794  108424 reflector.go:153] Listing and watching *core.PersistentVolumeClaim from storage/cacher.go:/persistentvolumeclaims
I0919 11:36:21.126285  108424 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.126466  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:36:21.126543  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:36:21.127107  108424 watch_cache.go:405] Replace watchCache (rev: 59826) 
I0919 11:36:21.127296  108424 store.go:1342] Monitoring configmaps count at <storage-prefix>//configmaps
I0919 11:36:21.127563  108424 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.127772  108424 reflector.go:153] Listing and watching *core.ConfigMap from storage/cacher.go:/configmaps
I0919 11:36:21.127986  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:36:21.128270  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:36:21.128350  108424 watch_cache.go:405] Replace watchCache (rev: 59826) 
I0919 11:36:21.129302  108424 store.go:1342] Monitoring namespaces count at <storage-prefix>//namespaces
I0919 11:36:21.129446  108424 reflector.go:153] Listing and watching *core.Namespace from storage/cacher.go:/namespaces
I0919 11:36:21.129762  108424 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.129868  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:36:21.129888  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:36:21.130310  108424 watch_cache.go:405] Replace watchCache (rev: 59826) 
I0919 11:36:21.130610  108424 store.go:1342] Monitoring endpoints count at <storage-prefix>//services/endpoints
I0919 11:36:21.130639  108424 reflector.go:153] Listing and watching *core.Endpoints from storage/cacher.go:/services/endpoints
I0919 11:36:21.130748  108424 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.130837  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:36:21.130862  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:36:21.131331  108424 watch_cache.go:405] Replace watchCache (rev: 59826) 
I0919 11:36:21.131655  108424 store.go:1342] Monitoring nodes count at <storage-prefix>//minions
I0919 11:36:21.131683  108424 reflector.go:153] Listing and watching *core.Node from storage/cacher.go:/minions
I0919 11:36:21.132393  108424 watch_cache.go:405] Replace watchCache (rev: 59826) 
I0919 11:36:21.132570  108424 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.132738  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:36:21.132765  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:36:21.133303  108424 store.go:1342] Monitoring pods count at <storage-prefix>//pods
I0919 11:36:21.133455  108424 reflector.go:153] Listing and watching *core.Pod from storage/cacher.go:/pods
I0919 11:36:21.133547  108424 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.133704  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:36:21.133727  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:36:21.134350  108424 watch_cache.go:405] Replace watchCache (rev: 59826) 
I0919 11:36:21.134907  108424 store.go:1342] Monitoring serviceaccounts count at <storage-prefix>//serviceaccounts
I0919 11:36:21.135029  108424 reflector.go:153] Listing and watching *core.ServiceAccount from storage/cacher.go:/serviceaccounts
I0919 11:36:21.135121  108424 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.135252  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:36:21.135277  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:36:21.136114  108424 watch_cache.go:405] Replace watchCache (rev: 59826) 
I0919 11:36:21.136298  108424 store.go:1342] Monitoring services count at <storage-prefix>//services/specs
I0919 11:36:21.136409  108424 reflector.go:153] Listing and watching *core.Service from storage/cacher.go:/services/specs
I0919 11:36:21.136876  108424 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.137064  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:36:21.137089  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:36:21.137227  108424 watch_cache.go:405] Replace watchCache (rev: 59826) 
I0919 11:36:21.137811  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:36:21.137835  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:36:21.138804  108424 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.138905  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:36:21.138923  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:36:21.139885  108424 store.go:1342] Monitoring replicationcontrollers count at <storage-prefix>//controllers
I0919 11:36:21.139916  108424 rest.go:115] the default service ipfamily for this cluster is: IPv4
I0919 11:36:21.139945  108424 reflector.go:153] Listing and watching *core.ReplicationController from storage/cacher.go:/controllers
I0919 11:36:21.140465  108424 storage_factory.go:285] storing bindings in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.140741  108424 watch_cache.go:405] Replace watchCache (rev: 59826) 
I0919 11:36:21.140735  108424 storage_factory.go:285] storing componentstatuses in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.141633  108424 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.142258  108424 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.142892  108424 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.143557  108424 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.144083  108424 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.144183  108424 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.144377  108424 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.144833  108424 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.145304  108424 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.145548  108424 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.146185  108424 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.146413  108424 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.146972  108424 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.147165  108424 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.147789  108424 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.147939  108424 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.148029  108424 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.148120  108424 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.148233  108424 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.148328  108424 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.148498  108424 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.149072  108424 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.149277  108424 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.150134  108424 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.150806  108424 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.151016  108424 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.151273  108424 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.152053  108424 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.152342  108424 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.152991  108424 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.153629  108424 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.154146  108424 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.155090  108424 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.155336  108424 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.155481  108424 master.go:450] Skipping disabled API group "auditregistration.k8s.io".
I0919 11:36:21.155506  108424 master.go:461] Enabling API group "authentication.k8s.io".
I0919 11:36:21.155517  108424 master.go:461] Enabling API group "authorization.k8s.io".
I0919 11:36:21.155676  108424 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.155822  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:36:21.155847  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:36:21.156689  108424 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0919 11:36:21.156746  108424 reflector.go:153] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0919 11:36:21.156976  108424 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.157151  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:36:21.157180  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:36:21.158072  108424 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0919 11:36:21.158151  108424 reflector.go:153] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0919 11:36:21.158190  108424 watch_cache.go:405] Replace watchCache (rev: 59826) 
I0919 11:36:21.158330  108424 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.158507  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:36:21.158538  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:36:21.159153  108424 watch_cache.go:405] Replace watchCache (rev: 59826) 
I0919 11:36:21.159231  108424 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0919 11:36:21.159251  108424 master.go:461] Enabling API group "autoscaling".
I0919 11:36:21.159297  108424 reflector.go:153] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0919 11:36:21.159491  108424 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.159610  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:36:21.159664  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:36:21.160289  108424 store.go:1342] Monitoring jobs.batch count at <storage-prefix>//jobs
I0919 11:36:21.160444  108424 reflector.go:153] Listing and watching *batch.Job from storage/cacher.go:/jobs
I0919 11:36:21.160570  108424 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.160609  108424 watch_cache.go:405] Replace watchCache (rev: 59826) 
I0919 11:36:21.160708  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:36:21.160738  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:36:21.161254  108424 watch_cache.go:405] Replace watchCache (rev: 59826) 
I0919 11:36:21.161516  108424 store.go:1342] Monitoring cronjobs.batch count at <storage-prefix>//cronjobs
I0919 11:36:21.161548  108424 master.go:461] Enabling API group "batch".
I0919 11:36:21.161551  108424 reflector.go:153] Listing and watching *batch.CronJob from storage/cacher.go:/cronjobs
I0919 11:36:21.161692  108424 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.161813  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:36:21.161826  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:36:21.162669  108424 watch_cache.go:405] Replace watchCache (rev: 59826) 
I0919 11:36:21.162686  108424 store.go:1342] Monitoring certificatesigningrequests.certificates.k8s.io count at <storage-prefix>//certificatesigningrequests
I0919 11:36:21.162709  108424 master.go:461] Enabling API group "certificates.k8s.io".
I0919 11:36:21.162766  108424 reflector.go:153] Listing and watching *certificates.CertificateSigningRequest from storage/cacher.go:/certificatesigningrequests
I0919 11:36:21.162920  108424 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.163038  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:36:21.163063  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:36:21.163990  108424 watch_cache.go:405] Replace watchCache (rev: 59826) 
I0919 11:36:21.164000  108424 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0919 11:36:21.164058  108424 reflector.go:153] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0919 11:36:21.164207  108424 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.164392  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:36:21.164417  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:36:21.165218  108424 watch_cache.go:405] Replace watchCache (rev: 59826) 
I0919 11:36:21.165420  108424 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0919 11:36:21.165449  108424 master.go:461] Enabling API group "coordination.k8s.io".
I0919 11:36:21.165466  108424 master.go:450] Skipping disabled API group "discovery.k8s.io".
I0919 11:36:21.165475  108424 reflector.go:153] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0919 11:36:21.165670  108424 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.165794  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:36:21.165811  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:36:21.166655  108424 watch_cache.go:405] Replace watchCache (rev: 59826) 
I0919 11:36:21.166776  108424 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0919 11:36:21.166809  108424 master.go:461] Enabling API group "extensions".
I0919 11:36:21.166849  108424 reflector.go:153] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0919 11:36:21.167071  108424 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.167216  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:36:21.167284  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:36:21.167949  108424 store.go:1342] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I0919 11:36:21.167986  108424 reflector.go:153] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I0919 11:36:21.168187  108424 watch_cache.go:405] Replace watchCache (rev: 59826) 
I0919 11:36:21.168213  108424 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.168413  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:36:21.168444  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:36:21.169395  108424 watch_cache.go:405] Replace watchCache (rev: 59826) 
I0919 11:36:21.169448  108424 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0919 11:36:21.169468  108424 master.go:461] Enabling API group "networking.k8s.io".
I0919 11:36:21.169483  108424 reflector.go:153] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0919 11:36:21.169504  108424 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.169936  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:36:21.169966  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:36:21.170695  108424 watch_cache.go:405] Replace watchCache (rev: 59826) 
I0919 11:36:21.170776  108424 store.go:1342] Monitoring runtimeclasses.node.k8s.io count at <storage-prefix>//runtimeclasses
I0919 11:36:21.170791  108424 master.go:461] Enabling API group "node.k8s.io".
I0919 11:36:21.170862  108424 reflector.go:153] Listing and watching *node.RuntimeClass from storage/cacher.go:/runtimeclasses
I0919 11:36:21.171018  108424 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.171162  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:36:21.171179  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:36:21.171857  108424 watch_cache.go:405] Replace watchCache (rev: 59826) 
I0919 11:36:21.171980  108424 store.go:1342] Monitoring poddisruptionbudgets.policy count at <storage-prefix>//poddisruptionbudgets
I0919 11:36:21.172055  108424 reflector.go:153] Listing and watching *policy.PodDisruptionBudget from storage/cacher.go:/poddisruptionbudgets
I0919 11:36:21.172132  108424 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.172270  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:36:21.172284  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:36:21.173122  108424 watch_cache.go:405] Replace watchCache (rev: 59826) 
I0919 11:36:21.173240  108424 store.go:1342] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicy
I0919 11:36:21.173301  108424 master.go:461] Enabling API group "policy".
I0919 11:36:21.173349  108424 reflector.go:153] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicy
I0919 11:36:21.173338  108424 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.173540  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:36:21.173560  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:36:21.174256  108424 watch_cache.go:405] Replace watchCache (rev: 59826) 
I0919 11:36:21.174822  108424 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0919 11:36:21.174847  108424 reflector.go:153] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0919 11:36:21.175210  108424 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.175379  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:36:21.175403  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:36:21.175678  108424 watch_cache.go:405] Replace watchCache (rev: 59826) 
I0919 11:36:21.176035  108424 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0919 11:36:21.176067  108424 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.176168  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:36:21.176197  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:36:21.176229  108424 reflector.go:153] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0919 11:36:21.176787  108424 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0919 11:36:21.176879  108424 reflector.go:153] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0919 11:36:21.177008  108424 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.177133  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:36:21.177145  108424 watch_cache.go:405] Replace watchCache (rev: 59826) 
I0919 11:36:21.177159  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:36:21.178010  108424 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0919 11:36:21.178044  108424 reflector.go:153] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0919 11:36:21.178067  108424 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.178275  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:36:21.178306  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:36:21.178413  108424 watch_cache.go:405] Replace watchCache (rev: 59826) 
I0919 11:36:21.178913  108424 watch_cache.go:405] Replace watchCache (rev: 59826) 
I0919 11:36:21.179230  108424 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0919 11:36:21.179442  108424 reflector.go:153] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0919 11:36:21.179443  108424 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.180335  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:36:21.180413  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:36:21.180594  108424 watch_cache.go:405] Replace watchCache (rev: 59826) 
I0919 11:36:21.181119  108424 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0919 11:36:21.181148  108424 reflector.go:153] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0919 11:36:21.181172  108424 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.181296  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:36:21.181343  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:36:21.182025  108424 watch_cache.go:405] Replace watchCache (rev: 59826) 
I0919 11:36:21.182152  108424 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0919 11:36:21.182266  108424 reflector.go:153] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0919 11:36:21.182374  108424 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.182552  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:36:21.182594  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:36:21.183402  108424 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0919 11:36:21.183440  108424 watch_cache.go:405] Replace watchCache (rev: 59826) 
I0919 11:36:21.183442  108424 master.go:461] Enabling API group "rbac.authorization.k8s.io".
I0919 11:36:21.183599  108424 reflector.go:153] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0919 11:36:21.184385  108424 watch_cache.go:405] Replace watchCache (rev: 59826) 
I0919 11:36:21.186137  108424 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.186343  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:36:21.186420  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:36:21.187038  108424 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0919 11:36:21.187068  108424 reflector.go:153] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0919 11:36:21.187242  108424 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.187440  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:36:21.187469  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:36:21.187852  108424 watch_cache.go:405] Replace watchCache (rev: 59826) 
I0919 11:36:21.188133  108424 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0919 11:36:21.188165  108424 master.go:461] Enabling API group "scheduling.k8s.io".
I0919 11:36:21.188180  108424 reflector.go:153] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0919 11:36:21.188333  108424 master.go:450] Skipping disabled API group "settings.k8s.io".
I0919 11:36:21.188549  108424 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.188724  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:36:21.188759  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:36:21.189430  108424 watch_cache.go:405] Replace watchCache (rev: 59826) 
I0919 11:36:21.189758  108424 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0919 11:36:21.189818  108424 reflector.go:153] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0919 11:36:21.189974  108424 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.190170  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:36:21.190196  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:36:21.190658  108424 watch_cache.go:405] Replace watchCache (rev: 59826) 
I0919 11:36:21.191246  108424 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0919 11:36:21.191283  108424 reflector.go:153] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0919 11:36:21.191294  108424 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.191499  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:36:21.191537  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:36:21.192249  108424 watch_cache.go:405] Replace watchCache (rev: 59826) 
I0919 11:36:21.192250  108424 store.go:1342] Monitoring csinodes.storage.k8s.io count at <storage-prefix>//csinodes
I0919 11:36:21.192266  108424 reflector.go:153] Listing and watching *storage.CSINode from storage/cacher.go:/csinodes
I0919 11:36:21.192346  108424 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.192507  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:36:21.193078  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:36:21.193292  108424 watch_cache.go:405] Replace watchCache (rev: 59826) 
I0919 11:36:21.193899  108424 store.go:1342] Monitoring csidrivers.storage.k8s.io count at <storage-prefix>//csidrivers
I0919 11:36:21.193964  108424 reflector.go:153] Listing and watching *storage.CSIDriver from storage/cacher.go:/csidrivers
I0919 11:36:21.194139  108424 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.194237  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:36:21.194251  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:36:21.194602  108424 watch_cache.go:405] Replace watchCache (rev: 59826) 
I0919 11:36:21.194976  108424 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0919 11:36:21.195022  108424 reflector.go:153] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0919 11:36:21.195160  108424 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.195284  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:36:21.195310  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:36:21.196102  108424 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0919 11:36:21.196124  108424 master.go:461] Enabling API group "storage.k8s.io".
I0919 11:36:21.196268  108424 reflector.go:153] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0919 11:36:21.196664  108424 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.196764  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:36:21.196777  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:36:21.197278  108424 watch_cache.go:405] Replace watchCache (rev: 59826) 
I0919 11:36:21.197303  108424 watch_cache.go:405] Replace watchCache (rev: 59826) 
I0919 11:36:21.197435  108424 store.go:1342] Monitoring deployments.apps count at <storage-prefix>//deployments
I0919 11:36:21.197624  108424 reflector.go:153] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0919 11:36:21.198243  108424 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.198441  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:36:21.198470  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:36:21.198541  108424 watch_cache.go:405] Replace watchCache (rev: 59826) 
I0919 11:36:21.199291  108424 store.go:1342] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0919 11:36:21.199337  108424 reflector.go:153] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0919 11:36:21.199502  108424 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.199636  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:36:21.199668  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:36:21.200295  108424 store.go:1342] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0919 11:36:21.200308  108424 watch_cache.go:405] Replace watchCache (rev: 59826) 
I0919 11:36:21.200373  108424 reflector.go:153] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0919 11:36:21.200550  108424 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.200695  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:36:21.200716  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:36:21.201250  108424 watch_cache.go:405] Replace watchCache (rev: 59826) 
I0919 11:36:21.201591  108424 store.go:1342] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0919 11:36:21.201742  108424 reflector.go:153] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0919 11:36:21.202493  108424 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.202833  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:36:21.202859  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:36:21.202915  108424 watch_cache.go:405] Replace watchCache (rev: 59826) 
I0919 11:36:21.204104  108424 store.go:1342] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0919 11:36:21.204131  108424 master.go:461] Enabling API group "apps".
I0919 11:36:21.204172  108424 reflector.go:153] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0919 11:36:21.204169  108424 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.204286  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:36:21.204304  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:36:21.205318  108424 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0919 11:36:21.205354  108424 reflector.go:153] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0919 11:36:21.205457  108424 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.205481  108424 watch_cache.go:405] Replace watchCache (rev: 59826) 
I0919 11:36:21.206245  108424 watch_cache.go:405] Replace watchCache (rev: 59826) 
I0919 11:36:21.206402  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:36:21.206425  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:36:21.207265  108424 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0919 11:36:21.207325  108424 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.207381  108424 reflector.go:153] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0919 11:36:21.207479  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:36:21.207505  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:36:21.208062  108424 watch_cache.go:405] Replace watchCache (rev: 59826) 
I0919 11:36:21.208655  108424 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0919 11:36:21.208700  108424 reflector.go:153] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0919 11:36:21.208696  108424 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.208841  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:36:21.208853  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:36:21.209465  108424 watch_cache.go:405] Replace watchCache (rev: 59826) 
I0919 11:36:21.209861  108424 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0919 11:36:21.209887  108424 master.go:461] Enabling API group "admissionregistration.k8s.io".
I0919 11:36:21.209919  108424 reflector.go:153] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0919 11:36:21.209920  108424 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.210256  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:36:21.210278  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:36:21.210711  108424 watch_cache.go:405] Replace watchCache (rev: 59826) 
I0919 11:36:21.211056  108424 store.go:1342] Monitoring events count at <storage-prefix>//events
I0919 11:36:21.211074  108424 master.go:461] Enabling API group "events.k8s.io".
I0919 11:36:21.211163  108424 reflector.go:153] Listing and watching *core.Event from storage/cacher.go:/events
I0919 11:36:21.211285  108424 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.211563  108424 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.211824  108424 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.211839  108424 watch_cache.go:405] Replace watchCache (rev: 59826) 
I0919 11:36:21.211995  108424 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.212184  108424 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.212317  108424 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.212610  108424 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.212712  108424 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.212836  108424 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.213000  108424 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.213931  108424 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.214259  108424 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.215535  108424 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.215807  108424 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.216580  108424 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.216921  108424 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.217687  108424 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.218261  108424 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.218987  108424 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.219400  108424 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0919 11:36:21.219469  108424 genericapiserver.go:404] Skipping API batch/v2alpha1 because it has no resources.
I0919 11:36:21.220235  108424 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.220429  108424 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.220692  108424 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.221787  108424 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.222548  108424 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.223233  108424 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.223560  108424 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.224662  108424 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.225326  108424 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.225621  108424 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.226209  108424 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0919 11:36:21.226269  108424 genericapiserver.go:404] Skipping API node.k8s.io/v1alpha1 because it has no resources.
I0919 11:36:21.226965  108424 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.227210  108424 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.228016  108424 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.228614  108424 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.229098  108424 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.229717  108424 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.230267  108424 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.231144  108424 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.231768  108424 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.232572  108424 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.233426  108424 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0919 11:36:21.233536  108424 genericapiserver.go:404] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
I0919 11:36:21.234802  108424 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.235534  108424 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0919 11:36:21.235598  108424 genericapiserver.go:404] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
I0919 11:36:21.236353  108424 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.237114  108424 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.237489  108424 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.238468  108424 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.239105  108424 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.240118  108424 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.240743  108424 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0919 11:36:21.240802  108424 genericapiserver.go:404] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
I0919 11:36:21.241744  108424 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.242477  108424 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.242836  108424 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.243828  108424 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.244085  108424 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.244435  108424 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.245048  108424 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.245409  108424 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.245683  108424 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.246593  108424 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.246841  108424 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.247111  108424 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0919 11:36:21.247165  108424 genericapiserver.go:404] Skipping API apps/v1beta2 because it has no resources.
W0919 11:36:21.247170  108424 genericapiserver.go:404] Skipping API apps/v1beta1 because it has no resources.
I0919 11:36:21.247800  108424 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.248521  108424 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.249414  108424 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.249988  108424 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.250843  108424 storage_factory.go:285] storing events.events.k8s.io in events.k8s.io/v1beta1, reading as events.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fe5d4548-d1e4-4a00-9fbf-4fe85b860a84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 11:36:21.254136  108424 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 11:36:21.254171  108424 healthz.go:177] healthz check poststarthook/bootstrap-controller failed: not finished
I0919 11:36:21.254182  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:36:21.254193  108424 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 11:36:21.254202  108424 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 11:36:21.254210  108424 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[-]poststarthook/bootstrap-controller failed: reason withheld
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 11:36:21.254241  108424 httplog.go:90] GET /healthz: (230.073µs) 0 [Go-http-client/1.1 127.0.0.1:40704]
I0919 11:36:21.255673  108424 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.53526ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40706]
I0919 11:36:21.258111  108424 httplog.go:90] GET /api/v1/services: (1.085278ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40706]
I0919 11:36:21.262295  108424 httplog.go:90] GET /api/v1/services: (991.785µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40706]
I0919 11:36:21.264336  108424 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 11:36:21.264388  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:36:21.264402  108424 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 11:36:21.264409  108424 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 11:36:21.264416  108424 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 11:36:21.264449  108424 httplog.go:90] GET /healthz: (260.923µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40706]
I0919 11:36:21.265639  108424 httplog.go:90] GET /api/v1/services: (950.612µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40706]
I0919 11:36:21.265963  108424 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.78075ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40704]
I0919 11:36:21.266193  108424 httplog.go:90] GET /api/v1/services: (1.266612ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40710]
I0919 11:36:21.268268  108424 httplog.go:90] POST /api/v1/namespaces: (1.881862ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40704]
I0919 11:36:21.269821  108424 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.141511ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40710]
I0919 11:36:21.271725  108424 httplog.go:90] POST /api/v1/namespaces: (1.419877ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40710]
I0919 11:36:21.272893  108424 httplog.go:90] GET /api/v1/namespaces/kube-node-lease: (767.36µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40710]
I0919 11:36:21.274541  108424 httplog.go:90] POST /api/v1/namespaces: (1.274544ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40710]
I0919 11:36:21.355184  108424 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 11:36:21.355222  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:36:21.355232  108424 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 11:36:21.355238  108424 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 11:36:21.355254  108424 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 11:36:21.355285  108424 httplog.go:90] GET /healthz: (289.391µs) 0 [Go-http-client/1.1 127.0.0.1:40710]
I0919 11:36:21.365460  108424 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 11:36:21.365502  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:36:21.365512  108424 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 11:36:21.365519  108424 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 11:36:21.365526  108424 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 11:36:21.365563  108424 httplog.go:90] GET /healthz: (332.121µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40710]
I0919 11:36:21.455182  108424 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 11:36:21.455224  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:36:21.455234  108424 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 11:36:21.455240  108424 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 11:36:21.455246  108424 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 11:36:21.455280  108424 httplog.go:90] GET /healthz: (264.855µs) 0 [Go-http-client/1.1 127.0.0.1:40710]
I0919 11:36:21.465452  108424 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 11:36:21.465515  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:36:21.465529  108424 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 11:36:21.465541  108424 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 11:36:21.465551  108424 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 11:36:21.465622  108424 httplog.go:90] GET /healthz: (442.28µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40710]
I0919 11:36:21.512917  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:21.512939  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:21.512920  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:21.512948  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:21.512985  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:21.513013  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:21.555274  108424 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 11:36:21.555309  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:36:21.555320  108424 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 11:36:21.555326  108424 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 11:36:21.555333  108424 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 11:36:21.555388  108424 httplog.go:90] GET /healthz: (279.456µs) 0 [Go-http-client/1.1 127.0.0.1:40710]
I0919 11:36:21.565474  108424 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 11:36:21.565513  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:36:21.565523  108424 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 11:36:21.565529  108424 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 11:36:21.565535  108424 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 11:36:21.565619  108424 httplog.go:90] GET /healthz: (313.681µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40710]
I0919 11:36:21.655155  108424 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 11:36:21.655200  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:36:21.655215  108424 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 11:36:21.655225  108424 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 11:36:21.655234  108424 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 11:36:21.655279  108424 httplog.go:90] GET /healthz: (271.464µs) 0 [Go-http-client/1.1 127.0.0.1:40710]
I0919 11:36:21.659319  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:21.659350  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:21.659740  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:21.659786  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:21.660103  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:21.660389  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:21.665331  108424 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 11:36:21.665418  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:36:21.665432  108424 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 11:36:21.665440  108424 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 11:36:21.665448  108424 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 11:36:21.665548  108424 httplog.go:90] GET /healthz: (369.338µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40710]
I0919 11:36:21.693194  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:21.693259  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:21.693280  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:21.693426  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:21.693648  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:21.693765  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:21.694415  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:21.714272  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:21.755232  108424 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 11:36:21.755301  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:36:21.755318  108424 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 11:36:21.755328  108424 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 11:36:21.755336  108424 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 11:36:21.755445  108424 httplog.go:90] GET /healthz: (410.595µs) 0 [Go-http-client/1.1 127.0.0.1:40710]
I0919 11:36:21.765331  108424 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 11:36:21.765536  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:36:21.765568  108424 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 11:36:21.765606  108424 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 11:36:21.765638  108424 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 11:36:21.765759  108424 httplog.go:90] GET /healthz: (627.157µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40710]
I0919 11:36:21.855252  108424 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 11:36:21.855508  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:36:21.855557  108424 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 11:36:21.855584  108424 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 11:36:21.855616  108424 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 11:36:21.855729  108424 httplog.go:90] GET /healthz: (648.05µs) 0 [Go-http-client/1.1 127.0.0.1:40710]
I0919 11:36:21.865333  108424 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 11:36:21.865435  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:36:21.865449  108424 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 11:36:21.865459  108424 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 11:36:21.865467  108424 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 11:36:21.865523  108424 httplog.go:90] GET /healthz: (451.317µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40710]
I0919 11:36:21.867942  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:21.955268  108424 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 11:36:21.955311  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:36:21.955321  108424 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 11:36:21.955327  108424 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 11:36:21.955334  108424 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 11:36:21.955445  108424 httplog.go:90] GET /healthz: (407.776µs) 0 [Go-http-client/1.1 127.0.0.1:40710]
I0919 11:36:21.965270  108424 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 11:36:21.965478  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:36:21.965554  108424 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 11:36:21.965597  108424 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 11:36:21.965637  108424 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 11:36:21.965776  108424 httplog.go:90] GET /healthz: (683.327µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40710]
I0919 11:36:22.055213  108424 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 11:36:22.055253  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:36:22.055263  108424 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 11:36:22.055270  108424 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 11:36:22.055277  108424 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 11:36:22.055309  108424 httplog.go:90] GET /healthz: (242.326µs) 0 [Go-http-client/1.1 127.0.0.1:40710]
I0919 11:36:22.065393  108424 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 11:36:22.065593  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:36:22.065634  108424 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 11:36:22.065703  108424 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 11:36:22.065753  108424 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 11:36:22.065936  108424 httplog.go:90] GET /healthz: (775.533µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40710]
I0919 11:36:22.113165  108424 client.go:361] parsed scheme: "endpoint"
I0919 11:36:22.113248  108424 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 11:36:22.156384  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:36:22.156421  108424 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 11:36:22.156432  108424 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 11:36:22.156439  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 11:36:22.156478  108424 httplog.go:90] GET /healthz: (1.54729ms) 0 [Go-http-client/1.1 127.0.0.1:40710]
I0919 11:36:22.166178  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:36:22.166248  108424 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 11:36:22.166261  108424 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 11:36:22.166270  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 11:36:22.166318  108424 httplog.go:90] GET /healthz: (1.167573ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40710]
I0919 11:36:22.255867  108424 httplog.go:90] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-node-critical: (1.538052ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40710]
I0919 11:36:22.255884  108424 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.548687ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:22.255942  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.675834ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40706]
I0919 11:36:22.257019  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:36:22.257076  108424 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 11:36:22.257086  108424 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 11:36:22.257095  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 11:36:22.257124  108424 httplog.go:90] GET /healthz: (1.405325ms) 0 [Go-http-client/1.1 127.0.0.1:40718]
I0919 11:36:22.257726  108424 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: (1.094415ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:22.258918  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.602267ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40706]
I0919 11:36:22.258949  108424 httplog.go:90] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (2.228044ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40710]
I0919 11:36:22.259294  108424 storage_scheduling.go:139] created PriorityClass system-node-critical with value 2000001000
I0919 11:36:22.260702  108424 httplog.go:90] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-cluster-critical: (1.178174ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40718]
I0919 11:36:22.260713  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (1.174926ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40706]
I0919 11:36:22.260724  108424 httplog.go:90] POST /api/v1/namespaces/kube-system/configmaps: (1.738251ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:22.261829  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (773.565µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40718]
I0919 11:36:22.262570  108424 httplog.go:90] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (1.390734ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:22.262739  108424 storage_scheduling.go:139] created PriorityClass system-cluster-critical with value 2000000000
I0919 11:36:22.262764  108424 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
I0919 11:36:22.264034  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (1.421147ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40718]
I0919 11:36:22.265125  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (658.271µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:22.265566  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:36:22.265589  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:36:22.265693  108424 httplog.go:90] GET /healthz: (721.038µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40706]
I0919 11:36:22.266258  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (799.394µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:22.267273  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (575.125µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:22.268286  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (683.984µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:22.270101  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-admin: (1.253085ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:22.272429  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.763812ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:22.272714  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/cluster-admin
I0919 11:36:22.278321  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:discovery: (5.327211ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:22.281090  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.806839ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:22.281351  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:discovery
I0919 11:36:22.282546  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:basic-user: (895.588µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:22.284761  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.766185ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:22.285033  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:basic-user
I0919 11:36:22.286561  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:public-info-viewer: (1.328937ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:22.288264  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.355402ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:22.288512  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:public-info-viewer
I0919 11:36:22.289549  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (802.029µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:22.291185  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.175474ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:22.291414  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/admin
I0919 11:36:22.292249  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (701.581µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:22.293884  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.262621ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:22.294205  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/edit
I0919 11:36:22.295171  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (731.269µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:22.296912  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.295661ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:22.297186  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/view
I0919 11:36:22.298384  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (945.755µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:22.300522  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.617744ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:22.300787  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin
I0919 11:36:22.301964  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (950.949µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:22.304656  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.166832ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:22.304930  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit
I0919 11:36:22.306074  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (964.639µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:22.308619  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.898282ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:22.308929  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view
I0919 11:36:22.310083  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:heapster: (956.383µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:22.312487  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.813894ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:22.312758  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:heapster
I0919 11:36:22.313925  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node: (951.88µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:22.316470  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.142305ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:22.316839  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node
I0919 11:36:22.317993  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-problem-detector: (909.131µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:22.319968  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.410658ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:22.320229  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector
I0919 11:36:22.321324  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kubelet-api-admin: (815.41µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:22.323123  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.376759ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:22.323387  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kubelet-api-admin
I0919 11:36:22.324406  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-bootstrapper: (715.365µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:22.326169  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.258062ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:22.326546  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-bootstrapper
I0919 11:36:22.327595  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:auth-delegator: (828.99µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:22.329245  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.19877ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:22.329538  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:auth-delegator
I0919 11:36:22.330634  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-aggregator: (806.207µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:22.332671  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.316895ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:22.332965  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator
I0919 11:36:22.334514  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-controller-manager: (971.974µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:22.336849  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.855795ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:22.337157  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager
I0919 11:36:22.338296  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-dns: (859.445µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:22.340632  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.834958ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:22.340909  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-dns
I0919 11:36:22.342141  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:persistent-volume-provisioner: (970.919µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:22.344717  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.932053ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:22.345164  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner
I0919 11:36:22.346569  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-attacher: (1.067784ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:22.348752  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.69223ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:22.349075  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:csi-external-attacher
I0919 11:36:22.350638  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:nodeclient: (1.173524ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:22.352691  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.542566ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:22.352904  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:nodeclient
I0919 11:36:22.353880  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient: (768.735µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:22.355692  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:36:22.355740  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:36:22.355872  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.531169ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:22.355882  108424 httplog.go:90] GET /healthz: (966.455µs) 0 [Go-http-client/1.1 127.0.0.1:40706]
I0919 11:36:22.356121  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
I0919 11:36:22.357265  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:volume-scheduler: (821.632µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40706]
I0919 11:36:22.359571  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.846921ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40706]
I0919 11:36:22.359816  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:volume-scheduler
I0919 11:36:22.360957  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-proxier: (947.928µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40706]
I0919 11:36:22.362869  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.506693ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40706]
I0919 11:36:22.363084  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-proxier
I0919 11:36:22.364196  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-scheduler: (890.618µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40706]
I0919 11:36:22.365809  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:36:22.365836  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:36:22.365866  108424 httplog.go:90] GET /healthz: (928.676µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:22.366352  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.690924ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40706]
I0919 11:36:22.366698  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler
I0919 11:36:22.367932  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-provisioner: (1.02201ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40706]
I0919 11:36:22.370074  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.499255ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40706]
I0919 11:36:22.370304  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:csi-external-provisioner
I0919 11:36:22.371391  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:attachdetach-controller: (897.757µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40706]
I0919 11:36:22.373694  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.887789ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40706]
I0919 11:36:22.373925  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0919 11:36:22.375091  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:clusterrole-aggregation-controller: (887.68µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40706]
I0919 11:36:22.377232  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.658741ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40706]
I0919 11:36:22.377491  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0919 11:36:22.378533  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:cronjob-controller: (830.871µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40706]
I0919 11:36:22.380537  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.517757ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40706]
I0919 11:36:22.380791  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0919 11:36:22.382168  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:daemon-set-controller: (1.105164ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40706]
I0919 11:36:22.384109  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.42383ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40706]
I0919 11:36:22.384582  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0919 11:36:22.385663  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:deployment-controller: (808.28µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40706]
I0919 11:36:22.387914  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.774787ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40706]
I0919 11:36:22.388128  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller
I0919 11:36:22.389448  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:disruption-controller: (1.082671ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40706]
I0919 11:36:22.391623  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.538863ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40706]
I0919 11:36:22.392025  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller
I0919 11:36:22.393147  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:endpoint-controller: (864.124µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40706]
I0919 11:36:22.395226  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.59227ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40706]
I0919 11:36:22.395585  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0919 11:36:22.397145  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:expand-controller: (1.126568ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40706]
I0919 11:36:22.399288  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.689162ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40706]
I0919 11:36:22.399593  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:expand-controller
I0919 11:36:22.400790  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:generic-garbage-collector: (880.198µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40706]
I0919 11:36:22.402806  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.532828ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40706]
I0919 11:36:22.403076  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0919 11:36:22.404253  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:horizontal-pod-autoscaler: (931.16µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40706]
I0919 11:36:22.406292  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.523818ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40706]
I0919 11:36:22.406649  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0919 11:36:22.407722  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:job-controller: (849.385µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40706]
I0919 11:36:22.409514  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.353372ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40706]
I0919 11:36:22.409733  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller
I0919 11:36:22.410785  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:namespace-controller: (840.83µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40706]
I0919 11:36:22.412712  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.474142ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40706]
I0919 11:36:22.412954  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:namespace-controller
I0919 11:36:22.414066  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:node-controller: (869.69µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40706]
I0919 11:36:22.415883  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.311733ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40706]
I0919 11:36:22.416173  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:node-controller
I0919 11:36:22.417526  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:persistent-volume-binder: (1.011964ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40706]
I0919 11:36:22.419334  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.418494ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40706]
I0919 11:36:22.419643  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0919 11:36:22.420846  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pod-garbage-collector: (1.008116ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40706]
I0919 11:36:22.422604  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.378819ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40706]
I0919 11:36:22.422840  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0919 11:36:22.424000  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replicaset-controller: (843.544µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40706]
I0919 11:36:22.425851  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.307598ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40706]
I0919 11:36:22.426167  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0919 11:36:22.427297  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replication-controller: (838.109µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40706]
I0919 11:36:22.429181  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.359691ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40706]
I0919 11:36:22.429513  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller
I0919 11:36:22.430711  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:resourcequota-controller: (931.918µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40706]
I0919 11:36:22.432985  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.687402ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40706]
I0919 11:36:22.433353  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0919 11:36:22.434655  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:route-controller: (909.949µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40706]
I0919 11:36:22.436500  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.320774ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40706]
I0919 11:36:22.436751  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:route-controller
I0919 11:36:22.437867  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-account-controller: (880.752µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40706]
I0919 11:36:22.440187  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.89532ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40706]
I0919 11:36:22.440429  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-account-controller
I0919 11:36:22.441433  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-controller: (783.533µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40706]
I0919 11:36:22.443258  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.403628ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40706]
I0919 11:36:22.443500  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-controller
I0919 11:36:22.444623  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:statefulset-controller: (930.688µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40706]
I0919 11:36:22.446964  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.800039ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40706]
I0919 11:36:22.447194  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0919 11:36:22.448317  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:ttl-controller: (894.955µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40706]
I0919 11:36:22.450637  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.900213ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40706]
I0919 11:36:22.450949  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-controller
I0919 11:36:22.455729  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:certificate-controller: (1.170679ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40706]
I0919 11:36:22.456021  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:36:22.456046  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:36:22.456090  108424 httplog.go:90] GET /healthz: (1.121606ms) 0 [Go-http-client/1.1 127.0.0.1:40716]
I0919 11:36:22.466926  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:36:22.467120  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:36:22.467320  108424 httplog.go:90] GET /healthz: (1.822723ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:22.479068  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.21671ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:22.479467  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller
I0919 11:36:22.495798  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pvc-protection-controller: (1.596625ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:22.513132  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:22.513299  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:22.513316  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:22.513331  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:22.513138  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:22.513286  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:22.516344  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.00797ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:22.516713  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0919 11:36:22.538859  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pv-protection-controller: (4.650475ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:22.556192  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:36:22.556236  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:36:22.556278  108424 httplog.go:90] GET /healthz: (1.396999ms) 0 [Go-http-client/1.1 127.0.0.1:40706]
I0919 11:36:22.556533  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.286699ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:22.556813  108424 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0919 11:36:22.566072  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:36:22.566106  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:36:22.566146  108424 httplog.go:90] GET /healthz: (1.06132ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:22.575779  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin: (1.47476ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:22.596759  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.409884ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:22.597083  108424 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin
I0919 11:36:22.616110  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (1.766216ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:22.636705  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.427133ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:22.637099  108424 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery
I0919 11:36:22.656001  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:36:22.656039  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:36:22.656083  108424 httplog.go:90] GET /healthz: (1.197342ms) 0 [Go-http-client/1.1 127.0.0.1:40706]
I0919 11:36:22.656159  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:basic-user: (1.85477ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:22.659539  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:22.659540  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:22.659938  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:22.659951  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:22.660283  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:22.660562  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:22.666106  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:36:22.666138  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:36:22.666211  108424 httplog.go:90] GET /healthz: (1.199444ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:22.676622  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.310288ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:22.676875  108424 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user
I0919 11:36:22.693512  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:22.693537  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:22.693554  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:22.693571  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:22.693810  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:22.693911  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:22.694596  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:22.695816  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:public-info-viewer: (1.490432ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:22.714497  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:22.716679  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.321458ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:22.717038  108424 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:public-info-viewer
I0919 11:36:22.735943  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node-proxier: (1.618829ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:22.756132  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:36:22.756175  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:36:22.756218  108424 httplog.go:90] GET /healthz: (1.343077ms) 0 [Go-http-client/1.1 127.0.0.1:40706]
I0919 11:36:22.756615  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.327375ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:22.756830  108424 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier
I0919 11:36:22.766253  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:36:22.766286  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:36:22.766319  108424 httplog.go:90] GET /healthz: (1.248688ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:22.776037  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-controller-manager: (1.513327ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:22.796482  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.252047ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:22.796921  108424 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager
I0919 11:36:22.815650  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-dns: (1.345877ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:22.836808  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.528858ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:22.837135  108424 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns
I0919 11:36:22.855948  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-scheduler: (1.60815ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:22.856158  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:36:22.856335  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:36:22.856440  108424 httplog.go:90] GET /healthz: (1.526906ms) 0 [Go-http-client/1.1 127.0.0.1:40706]
I0919 11:36:22.866036  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:36:22.866076  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:36:22.866114  108424 httplog.go:90] GET /healthz: (1.014975ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40706]
I0919 11:36:22.868054  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:22.876275  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.014348ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40706]
I0919 11:36:22.876672  108424 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler
I0919 11:36:22.895908  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:volume-scheduler: (1.502586ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40706]
I0919 11:36:22.916645  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.280861ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40706]
I0919 11:36:22.917030  108424 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:volume-scheduler
I0919 11:36:22.935865  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node: (1.569584ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40706]
I0919 11:36:22.956161  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:36:22.956233  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:36:22.956318  108424 httplog.go:90] GET /healthz: (1.491104ms) 0 [Go-http-client/1.1 127.0.0.1:40716]
I0919 11:36:22.956986  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.704894ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40706]
I0919 11:36:22.957342  108424 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node
I0919 11:36:22.966339  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:36:22.966464  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:36:22.966506  108424 httplog.go:90] GET /healthz: (1.396061ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40706]
I0919 11:36:22.975818  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:attachdetach-controller: (1.547505ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40706]
I0919 11:36:22.996325  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.011943ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40706]
I0919 11:36:22.996642  108424 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0919 11:36:23.015740  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:clusterrole-aggregation-controller: (1.406108ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40706]
I0919 11:36:23.036273  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.047892ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40706]
I0919 11:36:23.036524  108424 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0919 11:36:23.055980  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:cronjob-controller: (1.783719ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40706]
I0919 11:36:23.056102  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:36:23.056345  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:36:23.056502  108424 httplog.go:90] GET /healthz: (1.545719ms) 0 [Go-http-client/1.1 127.0.0.1:40716]
I0919 11:36:23.066096  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:36:23.066253  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:36:23.066404  108424 httplog.go:90] GET /healthz: (1.360913ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:23.076987  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.696926ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:23.077462  108424 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0919 11:36:23.095898  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:daemon-set-controller: (1.398101ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:23.119952  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (5.503336ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:23.120309  108424 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0919 11:36:23.135881  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:deployment-controller: (1.519761ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:23.156215  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:36:23.156252  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:36:23.156321  108424 httplog.go:90] GET /healthz: (1.41864ms) 0 [Go-http-client/1.1 127.0.0.1:40706]
I0919 11:36:23.157345  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.003341ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:23.157634  108424 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I0919 11:36:23.166627  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:36:23.166659  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:36:23.166711  108424 httplog.go:90] GET /healthz: (1.523004ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:23.175757  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:disruption-controller: (1.470754ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:23.197066  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.780516ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:23.197441  108424 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I0919 11:36:23.215945  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:endpoint-controller: (1.600442ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:23.237061  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.70865ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:23.237308  108424 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0919 11:36:23.256790  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:36:23.256835  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:36:23.256886  108424 httplog.go:90] GET /healthz: (2.013648ms) 0 [Go-http-client/1.1 127.0.0.1:40706]
I0919 11:36:23.257053  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:expand-controller: (2.457794ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:23.266397  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:36:23.266443  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:36:23.266488  108424 httplog.go:90] GET /healthz: (1.362728ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:23.276769  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.437811ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:23.277059  108424 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
I0919 11:36:23.296134  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:generic-garbage-collector: (1.818659ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:23.316821  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.588965ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:23.317116  108424 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0919 11:36:23.335946  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:horizontal-pod-autoscaler: (1.657985ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:23.356412  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:36:23.356475  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:36:23.356546  108424 httplog.go:90] GET /healthz: (1.658226ms) 0 [Go-http-client/1.1 127.0.0.1:40706]
I0919 11:36:23.356721  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.427314ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:23.356958  108424 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0919 11:36:23.367163  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:36:23.367221  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:36:23.367273  108424 httplog.go:90] GET /healthz: (1.422996ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:23.375850  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:job-controller: (1.525545ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:23.397705  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.380001ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:23.397963  108424 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I0919 11:36:23.415681  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:namespace-controller: (1.416511ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:23.436548  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.173655ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:23.436851  108424 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I0919 11:36:23.456092  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:node-controller: (1.814329ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:23.456704  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:36:23.456738  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:36:23.456769  108424 httplog.go:90] GET /healthz: (1.036326ms) 0 [Go-http-client/1.1 127.0.0.1:40706]
I0919 11:36:23.466527  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:36:23.466563  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:36:23.466619  108424 httplog.go:90] GET /healthz: (1.337626ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40706]
I0919 11:36:23.476501  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.233181ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40706]
I0919 11:36:23.476776  108424 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I0919 11:36:23.495999  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:persistent-volume-binder: (1.53241ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40706]
I0919 11:36:23.513440  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:23.513454  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:23.513461  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:23.513480  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:23.513650  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:23.513650  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:23.516930  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.574047ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40706]
I0919 11:36:23.517300  108424 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0919 11:36:23.536041  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pod-garbage-collector: (1.712111ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40706]
I0919 11:36:23.556232  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:36:23.556590  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:36:23.556810  108424 httplog.go:90] GET /healthz: (1.938433ms) 0 [Go-http-client/1.1 127.0.0.1:40716]
I0919 11:36:23.556767  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.510638ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40706]
I0919 11:36:23.557262  108424 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0919 11:36:23.566163  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:36:23.566212  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:36:23.566287  108424 httplog.go:90] GET /healthz: (1.181797ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:23.580604  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replicaset-controller: (6.268982ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:23.596887  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.583152ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:23.597489  108424 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0919 11:36:23.616071  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replication-controller: (1.712851ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:23.636918  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.574763ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:23.637444  108424 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I0919 11:36:23.656213  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:resourcequota-controller: (1.650623ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:23.656265  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:36:23.656292  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:36:23.656352  108424 httplog.go:90] GET /healthz: (1.32673ms) 0 [Go-http-client/1.1 127.0.0.1:40706]
I0919 11:36:23.659759  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:23.659806  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:23.660166  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:23.660257  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:23.660416  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:23.660786  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:23.666714  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:36:23.666910  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:36:23.667088  108424 httplog.go:90] GET /healthz: (1.861457ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40706]
I0919 11:36:23.680164  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.196437ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40706]
I0919 11:36:23.680434  108424 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0919 11:36:23.693740  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:23.693768  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:23.693777  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:23.693787  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:23.693997  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:23.694082  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:23.694806  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:23.695693  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:route-controller: (1.370907ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40706]
I0919 11:36:23.714657  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:23.716706  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.39666ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40706]
I0919 11:36:23.716998  108424 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I0919 11:36:23.735977  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-account-controller: (1.694623ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40706]
I0919 11:36:23.757086  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.720075ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40706]
I0919 11:36:23.757390  108424 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I0919 11:36:23.757939  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:36:23.757966  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:36:23.758005  108424 httplog.go:90] GET /healthz: (1.031682ms) 0 [Go-http-client/1.1 127.0.0.1:40716]
I0919 11:36:23.766840  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:36:23.766885  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:36:23.766932  108424 httplog.go:90] GET /healthz: (1.789648ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:23.775883  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-controller: (1.533945ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
E0919 11:36:23.782900  108424 factory.go:590] Error getting pod permit-plugin98d730eb-10d8-4279-8fc9-576571991a2a/test-pod for retry: Get http://127.0.0.1:35423/api/v1/namespaces/permit-plugin98d730eb-10d8-4279-8fc9-576571991a2a/pods/test-pod: dial tcp 127.0.0.1:35423: connect: connection refused; retrying...
I0919 11:36:23.796076  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.909861ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:23.796310  108424 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I0919 11:36:23.815798  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:statefulset-controller: (1.473115ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:23.836864  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.584819ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:23.837134  108424 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0919 11:36:23.856035  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:36:23.856079  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:36:23.856126  108424 httplog.go:90] GET /healthz: (1.213003ms) 0 [Go-http-client/1.1 127.0.0.1:40706]
I0919 11:36:23.856263  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:ttl-controller: (1.971462ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:23.866585  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:36:23.866624  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:36:23.866675  108424 httplog.go:90] GET /healthz: (1.463944ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:23.868225  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:23.876562  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.214449ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:23.876899  108424 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I0919 11:36:23.895827  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:certificate-controller: (1.545243ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:23.916708  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.348874ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:23.917100  108424 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I0919 11:36:23.936201  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pvc-protection-controller: (1.832787ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:23.956313  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:36:23.956355  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:36:23.956436  108424 httplog.go:90] GET /healthz: (1.376033ms) 0 [Go-http-client/1.1 127.0.0.1:40706]
I0919 11:36:23.956622  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.303176ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:23.956868  108424 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0919 11:36:23.966972  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:36:23.967017  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:36:23.967074  108424 httplog.go:90] GET /healthz: (1.468933ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:23.975872  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pv-protection-controller: (1.545793ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:23.997425  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.063516ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:23.997701  108424 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0919 11:36:24.016030  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/extension-apiserver-authentication-reader: (1.652917ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:24.018704  108424 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.021685ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:24.036731  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.380805ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:24.037115  108424 storage_rbac.go:278] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
I0919 11:36:24.055717  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:36:24.055755  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:36:24.055775  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:bootstrap-signer: (1.472737ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:24.055799  108424 httplog.go:90] GET /healthz: (925.761µs) 0 [Go-http-client/1.1 127.0.0.1:40706]
I0919 11:36:24.057699  108424 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.398162ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40706]
I0919 11:36:24.066262  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:36:24.066557  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:36:24.066770  108424 httplog.go:90] GET /healthz: (1.673892ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40706]
I0919 11:36:24.076727  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.438607ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40706]
I0919 11:36:24.076979  108424 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0919 11:36:24.096094  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:cloud-provider: (1.741092ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40706]
I0919 11:36:24.098517  108424 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.616459ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40706]
I0919 11:36:24.117349  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.979417ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40706]
I0919 11:36:24.117677  108424 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0919 11:36:24.136185  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:token-cleaner: (1.781032ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40706]
I0919 11:36:24.138466  108424 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.734404ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40706]
I0919 11:36:24.156320  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:36:24.156386  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:36:24.156466  108424 httplog.go:90] GET /healthz: (1.47014ms) 0 [Go-http-client/1.1 127.0.0.1:40716]
I0919 11:36:24.157597  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (3.155038ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40706]
I0919 11:36:24.157931  108424 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0919 11:36:24.166929  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:36:24.167201  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:36:24.167452  108424 httplog.go:90] GET /healthz: (2.125631ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40706]
I0919 11:36:24.175966  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-controller-manager: (1.64677ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40706]
I0919 11:36:24.178535  108424 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.826238ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40706]
I0919 11:36:24.196585  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.206402ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40706]
I0919 11:36:24.196881  108424 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0919 11:36:24.215869  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-scheduler: (1.509074ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40706]
I0919 11:36:24.217840  108424 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.412339ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40706]
I0919 11:36:24.236818  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.469602ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40706]
I0919 11:36:24.237114  108424 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0919 11:36:24.256127  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:36:24.256383  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:36:24.256252  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles/system:controller:bootstrap-signer: (1.613209ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40706]
I0919 11:36:24.256588  108424 httplog.go:90] GET /healthz: (1.590226ms) 0 [Go-http-client/1.1 127.0.0.1:40716]
I0919 11:36:24.258727  108424 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.671491ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:24.266749  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:36:24.266790  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:36:24.266827  108424 httplog.go:90] GET /healthz: (1.602168ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:24.276922  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles: (2.323801ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:24.277168  108424 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0919 11:36:24.296125  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::extension-apiserver-authentication-reader: (1.783838ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:24.298327  108424 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.575583ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:24.311068  108424 httplog.go:90] GET /api/v1/namespaces/default: (1.840991ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51154]
I0919 11:36:24.313180  108424 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.462472ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51154]
I0919 11:36:24.315384  108424 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.568844ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51154]
I0919 11:36:24.317651  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.901826ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:24.317968  108424 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::extension-apiserver-authentication-reader in kube-system
I0919 11:36:24.336262  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-controller-manager: (1.810011ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:24.338116  108424 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.33459ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:24.356220  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:36:24.356570  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:36:24.356590  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.330347ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:24.356778  108424 httplog.go:90] GET /healthz: (1.82386ms) 0 [Go-http-client/1.1 127.0.0.1:40706]
I0919 11:36:24.357162  108424 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0919 11:36:24.366385  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:36:24.366539  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:36:24.366697  108424 httplog.go:90] GET /healthz: (1.494521ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40706]
I0919 11:36:24.376247  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-scheduler: (1.920698ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40706]
I0919 11:36:24.378237  108424 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.523989ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40706]
I0919 11:36:24.396644  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.182375ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40706]
I0919 11:36:24.397132  108424 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0919 11:36:24.415789  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:bootstrap-signer: (1.424091ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40706]
I0919 11:36:24.418318  108424 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.655199ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40706]
I0919 11:36:24.436709  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.413455ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40706]
I0919 11:36:24.437090  108424 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0919 11:36:24.455730  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:36:24.455764  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:36:24.455797  108424 httplog.go:90] GET /healthz: (928.518µs) 0 [Go-http-client/1.1 127.0.0.1:40716]
I0919 11:36:24.455835  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:cloud-provider: (1.436386ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40706]
I0919 11:36:24.457439  108424 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.218233ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:24.466477  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:36:24.466519  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:36:24.466568  108424 httplog.go:90] GET /healthz: (1.427584ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:24.476537  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.12928ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:24.476852  108424 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0919 11:36:24.496121  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:token-cleaner: (1.612011ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:24.498383  108424 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.603351ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:24.513793  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:24.513802  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:24.513919  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:24.513933  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:24.514060  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:24.514116  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:24.516790  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.56418ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:24.517236  108424 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0919 11:36:24.536116  108424 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings/system:controller:bootstrap-signer: (1.652579ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:24.538034  108424 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.22048ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:24.556115  108424 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 11:36:24.556166  108424 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 11:36:24.556225  108424 httplog.go:90] GET /healthz: (1.321727ms) 0 [Go-http-client/1.1 127.0.0.1:40706]
I0919 11:36:24.556698  108424 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings: (2.406834ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:24.557041  108424 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0919 11:36:24.566399  108424 httplog.go:90] GET /healthz: (1.204444ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:24.568313  108424 httplog.go:90] GET /api/v1/namespaces/default: (1.467977ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:24.571054  108424 httplog.go:90] POST /api/v1/namespaces: (2.122309ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:24.573202  108424 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.590944ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:24.577320  108424 httplog.go:90] POST /api/v1/namespaces/default/services: (3.566889ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:24.578880  108424 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (941.358µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:24.581165  108424 httplog.go:90] POST /api/v1/namespaces/default/endpoints: (1.733542ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:24.656419  108424 httplog.go:90] GET /healthz: (1.465386ms) 200 [Go-http-client/1.1 127.0.0.1:40716]
W0919 11:36:24.657838  108424 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 11:36:24.657893  108424 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 11:36:24.657946  108424 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 11:36:24.657960  108424 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 11:36:24.658011  108424 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 11:36:24.658028  108424 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 11:36:24.658040  108424 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 11:36:24.658050  108424 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 11:36:24.658062  108424 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 11:36:24.658074  108424 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 11:36:24.658090  108424 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 11:36:24.658166  108424 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0919 11:36:24.658247  108424 factory.go:294] Creating scheduler from algorithm provider 'DefaultProvider'
I0919 11:36:24.658266  108424 factory.go:382] Creating scheduler with fit predicates 'map[CheckNodeUnschedulable:{} CheckVolumeBinding:{} GeneralPredicates:{} MatchInterPodAffinity:{} MaxAzureDiskVolumeCount:{} MaxCSIVolumeCountPred:{} MaxEBSVolumeCount:{} MaxGCEPDVolumeCount:{} NoDiskConflict:{} NoVolumeZoneConflict:{} PodToleratesNodeTaints:{}]' and priority functions 'map[BalancedResourceAllocation:{} ImageLocalityPriority:{} InterPodAffinityPriority:{} LeastRequestedPriority:{} NodeAffinityPriority:{} NodePreferAvoidPodsPriority:{} SelectorSpreadPriority:{} TaintTolerationPriority:{}]'
I0919 11:36:24.658516  108424 shared_informer.go:197] Waiting for caches to sync for scheduler
I0919 11:36:24.658785  108424 reflector.go:118] Starting reflector *v1.Pod (12h0m0s) from k8s.io/kubernetes/test/integration/scheduler/util.go:231
I0919 11:36:24.658807  108424 reflector.go:153] Listing and watching *v1.Pod from k8s.io/kubernetes/test/integration/scheduler/util.go:231
I0919 11:36:24.659920  108424 httplog.go:90] GET /api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0: (740.284µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40716]
I0919 11:36:24.660060  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:24.660002  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:24.660378  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:24.660355  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:24.660601  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:24.660948  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:24.661189  108424 get.go:251] Starting watch for /api/v1/pods, rv=59826 labels= fields=status.phase!=Failed,status.phase!=Succeeded timeout=5m39s
I0919 11:36:24.694024  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:24.694024  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:24.694067  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:24.694074  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:24.694033  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:24.694210  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:24.695035  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:24.714855  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:24.758805  108424 shared_informer.go:227] caches populated
I0919 11:36:24.758852  108424 shared_informer.go:204] Caches are synced for scheduler 
I0919 11:36:24.759151  108424 reflector.go:118] Starting reflector *v1.PersistentVolumeClaim (1s) from k8s.io/client-go/informers/factory.go:134
I0919 11:36:24.759183  108424 reflector.go:153] Listing and watching *v1.PersistentVolumeClaim from k8s.io/client-go/informers/factory.go:134
I0919 11:36:24.759273  108424 reflector.go:118] Starting reflector *v1beta1.PodDisruptionBudget (1s) from k8s.io/client-go/informers/factory.go:134
I0919 11:36:24.759406  108424 reflector.go:118] Starting reflector *v1.Node (1s) from k8s.io/client-go/informers/factory.go:134
I0919 11:36:24.759434  108424 reflector.go:153] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:134
I0919 11:36:24.759439  108424 reflector.go:118] Starting reflector *v1beta1.CSINode (1s) from k8s.io/client-go/informers/factory.go:134
I0919 11:36:24.759454  108424 reflector.go:153] Listing and watching *v1beta1.CSINode from k8s.io/client-go/informers/factory.go:134
I0919 11:36:24.759422  108424 reflector.go:153] Listing and watching *v1beta1.PodDisruptionBudget from k8s.io/client-go/informers/factory.go:134
I0919 11:36:24.759529  108424 reflector.go:118] Starting reflector *v1.StorageClass (1s) from k8s.io/client-go/informers/factory.go:134
I0919 11:36:24.759551  108424 reflector.go:153] Listing and watching *v1.StorageClass from k8s.io/client-go/informers/factory.go:134
I0919 11:36:24.759669  108424 reflector.go:118] Starting reflector *v1.ReplicationController (1s) from k8s.io/client-go/informers/factory.go:134
I0919 11:36:24.759698  108424 reflector.go:153] Listing and watching *v1.ReplicationController from k8s.io/client-go/informers/factory.go:134
I0919 11:36:24.759720  108424 reflector.go:118] Starting reflector *v1.Service (1s) from k8s.io/client-go/informers/factory.go:134
I0919 11:36:24.759446  108424 reflector.go:118] Starting reflector *v1.ReplicaSet (1s) from k8s.io/client-go/informers/factory.go:134
I0919 11:36:24.759736  108424 reflector.go:153] Listing and watching *v1.Service from k8s.io/client-go/informers/factory.go:134
I0919 11:36:24.759743  108424 reflector.go:153] Listing and watching *v1.ReplicaSet from k8s.io/client-go/informers/factory.go:134
I0919 11:36:24.759744  108424 reflector.go:118] Starting reflector *v1.PersistentVolume (1s) from k8s.io/client-go/informers/factory.go:134
I0919 11:36:24.759903  108424 reflector.go:153] Listing and watching *v1.PersistentVolume from k8s.io/client-go/informers/factory.go:134
I0919 11:36:24.759309  108424 reflector.go:118] Starting reflector *v1.StatefulSet (1s) from k8s.io/client-go/informers/factory.go:134
I0919 11:36:24.760050  108424 reflector.go:153] Listing and watching *v1.StatefulSet from k8s.io/client-go/informers/factory.go:134
I0919 11:36:24.760886  108424 httplog.go:90] GET /apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: (630.196µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40734]
I0919 11:36:24.760923  108424 httplog.go:90] GET /apis/apps/v1/replicasets?limit=500&resourceVersion=0: (553.714µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40730]
I0919 11:36:24.760891  108424 httplog.go:90] GET /apis/apps/v1/statefulsets?limit=500&resourceVersion=0: (348.115µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40736]
I0919 11:36:24.761023  108424 httplog.go:90] GET /api/v1/services?limit=500&resourceVersion=0: (733.05µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40728]
I0919 11:36:24.761105  108424 httplog.go:90] GET /api/v1/persistentvolumes?limit=500&resourceVersion=0: (414.672µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40738]
I0919 11:36:24.761023  108424 httplog.go:90] GET /api/v1/replicationcontrollers?limit=500&resourceVersion=0: (445.323µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40726]
I0919 11:36:24.761525  108424 httplog.go:90] GET /apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: (1.234937ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40722]
I0919 11:36:24.761572  108424 get.go:251] Starting watch for /apis/apps/v1/statefulsets, rv=59826 labels= fields= timeout=7m12s
I0919 11:36:24.761686  108424 get.go:251] Starting watch for /apis/apps/v1/replicasets, rv=59826 labels= fields= timeout=7m54s
I0919 11:36:24.761791  108424 httplog.go:90] GET /apis/storage.k8s.io/v1beta1/csinodes?limit=500&resourceVersion=0: (569.072µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40724]
I0919 11:36:24.762224  108424 get.go:251] Starting watch for /api/v1/replicationcontrollers, rv=59826 labels= fields= timeout=8m14s
I0919 11:36:24.762444  108424 get.go:251] Starting watch for /api/v1/persistentvolumes, rv=59826 labels= fields= timeout=7m57s
I0919 11:36:24.762550  108424 get.go:251] Starting watch for /api/v1/services, rv=59940 labels= fields= timeout=7m26s
I0919 11:36:24.762558  108424 get.go:251] Starting watch for /apis/policy/v1beta1/poddisruptionbudgets, rv=59826 labels= fields= timeout=5m8s
I0919 11:36:24.762619  108424 get.go:251] Starting watch for /apis/storage.k8s.io/v1beta1/csinodes, rv=59826 labels= fields= timeout=6m59s
I0919 11:36:24.762709  108424 httplog.go:90] GET /api/v1/nodes?limit=500&resourceVersion=0: (1.540702ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40706]
I0919 11:36:24.762830  108424 get.go:251] Starting watch for /apis/storage.k8s.io/v1/storageclasses, rv=59826 labels= fields= timeout=6m31s
I0919 11:36:24.763220  108424 httplog.go:90] GET /api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: (361.969µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40722]
I0919 11:36:24.763461  108424 get.go:251] Starting watch for /api/v1/nodes, rv=59826 labels= fields= timeout=9m57s
I0919 11:36:24.763902  108424 get.go:251] Starting watch for /api/v1/persistentvolumeclaims, rv=59826 labels= fields= timeout=7m19s
I0919 11:36:24.859164  108424 shared_informer.go:227] caches populated
I0919 11:36:24.859206  108424 shared_informer.go:227] caches populated
I0919 11:36:24.859212  108424 shared_informer.go:227] caches populated
I0919 11:36:24.859231  108424 shared_informer.go:227] caches populated
I0919 11:36:24.859236  108424 shared_informer.go:227] caches populated
I0919 11:36:24.859241  108424 shared_informer.go:227] caches populated
I0919 11:36:24.859245  108424 shared_informer.go:227] caches populated
I0919 11:36:24.859249  108424 shared_informer.go:227] caches populated
I0919 11:36:24.859253  108424 shared_informer.go:227] caches populated
I0919 11:36:24.859351  108424 shared_informer.go:227] caches populated
I0919 11:36:24.859379  108424 shared_informer.go:227] caches populated
I0919 11:36:24.862551  108424 httplog.go:90] POST /api/v1/namespaces: (2.473699ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40744]
I0919 11:36:24.863175  108424 node_lifecycle_controller.go:327] Sending events to api server.
I0919 11:36:24.863335  108424 node_lifecycle_controller.go:359] Controller is using taint based evictions.
W0919 11:36:24.863425  108424 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0919 11:36:24.863532  108424 taint_manager.go:162] Sending events to api server.
I0919 11:36:24.863641  108424 node_lifecycle_controller.go:453] Controller will reconcile labels.
I0919 11:36:24.863741  108424 node_lifecycle_controller.go:465] Controller will taint node by condition.
W0919 11:36:24.863791  108424 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 11:36:24.863848  108424 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0919 11:36:24.863960  108424 node_lifecycle_controller.go:488] Starting node controller
I0919 11:36:24.864050  108424 shared_informer.go:197] Waiting for caches to sync for taint
I0919 11:36:24.864206  108424 reflector.go:118] Starting reflector *v1.Namespace (1s) from k8s.io/client-go/informers/factory.go:134
I0919 11:36:24.864260  108424 reflector.go:153] Listing and watching *v1.Namespace from k8s.io/client-go/informers/factory.go:134
I0919 11:36:24.865459  108424 httplog.go:90] GET /api/v1/namespaces?limit=500&resourceVersion=0: (777.108µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40744]
I0919 11:36:24.866409  108424 get.go:251] Starting watch for /api/v1/namespaces, rv=59942 labels= fields= timeout=6m41s
I0919 11:36:24.868426  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:24.964076  108424 shared_informer.go:227] caches populated
I0919 11:36:24.964136  108424 shared_informer.go:227] caches populated
I0919 11:36:24.964142  108424 shared_informer.go:227] caches populated
I0919 11:36:24.964511  108424 reflector.go:118] Starting reflector *v1.DaemonSet (1s) from k8s.io/client-go/informers/factory.go:134
I0919 11:36:24.964541  108424 reflector.go:153] Listing and watching *v1.DaemonSet from k8s.io/client-go/informers/factory.go:134
I0919 11:36:24.964549  108424 reflector.go:118] Starting reflector *v1beta1.Lease (1s) from k8s.io/client-go/informers/factory.go:134
I0919 11:36:24.964549  108424 reflector.go:118] Starting reflector *v1.Pod (1s) from k8s.io/client-go/informers/factory.go:134
I0919 11:36:24.964570  108424 reflector.go:153] Listing and watching *v1beta1.Lease from k8s.io/client-go/informers/factory.go:134
I0919 11:36:24.964607  108424 reflector.go:153] Listing and watching *v1.Pod from k8s.io/client-go/informers/factory.go:134
I0919 11:36:24.965783  108424 httplog.go:90] GET /apis/apps/v1/daemonsets?limit=500&resourceVersion=0: (596.025µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40750]
I0919 11:36:24.965783  108424 httplog.go:90] GET /apis/coordination.k8s.io/v1beta1/leases?limit=500&resourceVersion=0: (589.655µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40746]
I0919 11:36:24.965853  108424 httplog.go:90] GET /api/v1/pods?limit=500&resourceVersion=0: (543.743µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40748]
I0919 11:36:24.966571  108424 get.go:251] Starting watch for /apis/coordination.k8s.io/v1beta1/leases, rv=59826 labels= fields= timeout=7m5s
I0919 11:36:24.966664  108424 get.go:251] Starting watch for /api/v1/pods, rv=59826 labels= fields= timeout=7m0s
I0919 11:36:24.966867  108424 get.go:251] Starting watch for /apis/apps/v1/daemonsets, rv=59826 labels= fields= timeout=8m24s
I0919 11:36:25.040591  108424 node_lifecycle_controller.go:718] Controller observed a Node deletion: node-0
I0919 11:36:25.040635  108424 controller_utils.go:168] Recording Removing Node node-0 from Controller event message for node node-0
I0919 11:36:25.040690  108424 node_lifecycle_controller.go:718] Controller observed a Node deletion: node-1
I0919 11:36:25.040725  108424 controller_utils.go:168] Recording Removing Node node-1 from Controller event message for node node-1
I0919 11:36:25.040733  108424 node_lifecycle_controller.go:718] Controller observed a Node deletion: node-2
I0919 11:36:25.040737  108424 controller_utils.go:168] Recording Removing Node node-2 from Controller event message for node node-2
I0919 11:36:25.040940  108424 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-2", UID:"fc841bdd-2177-47b1-86b7-7b22ee28078a", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RemovingNode' Node node-2 event: Removing Node node-2 from Controller
I0919 11:36:25.040982  108424 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-1", UID:"ccb802d9-fb76-4832-b559-ef818c9444ac", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RemovingNode' Node node-1 event: Removing Node node-1 from Controller
I0919 11:36:25.040993  108424 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-0", UID:"fe6d5b20-2cb8-4a7d-a865-e085511a459d", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RemovingNode' Node node-0 event: Removing Node node-0 from Controller
I0919 11:36:25.044050  108424 httplog.go:90] POST /api/v1/namespaces/default/events: (2.831967ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42320]
I0919 11:36:25.046985  108424 httplog.go:90] POST /api/v1/namespaces/default/events: (2.386029ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42320]
I0919 11:36:25.050178  108424 httplog.go:90] POST /api/v1/namespaces/default/events: (2.340643ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42320]
I0919 11:36:25.064208  108424 shared_informer.go:227] caches populated
I0919 11:36:25.064240  108424 shared_informer.go:204] Caches are synced for taint 
I0919 11:36:25.064337  108424 shared_informer.go:227] caches populated
I0919 11:36:25.064418  108424 shared_informer.go:227] caches populated
I0919 11:36:25.064424  108424 taint_manager.go:186] Starting NoExecuteTaintManager
I0919 11:36:25.064428  108424 shared_informer.go:227] caches populated
I0919 11:36:25.064644  108424 shared_informer.go:227] caches populated
I0919 11:36:25.064655  108424 shared_informer.go:227] caches populated
I0919 11:36:25.064662  108424 shared_informer.go:227] caches populated
I0919 11:36:25.064669  108424 shared_informer.go:227] caches populated
I0919 11:36:25.064698  108424 shared_informer.go:227] caches populated
I0919 11:36:25.064705  108424 shared_informer.go:227] caches populated
I0919 11:36:25.064713  108424 shared_informer.go:227] caches populated
I0919 11:36:25.064720  108424 shared_informer.go:227] caches populated
I0919 11:36:25.068155  108424 httplog.go:90] POST /api/v1/nodes: (2.606934ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40760]
I0919 11:36:25.068421  108424 node_tree.go:93] Added node "node-0" in group "region1:\x00:zone1" to NodeTree
I0919 11:36:25.068439  108424 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-0"}
I0919 11:36:25.068643  108424 taint_manager.go:438] Updating known taints on node node-0: []
I0919 11:36:25.071848  108424 httplog.go:90] POST /api/v1/nodes: (2.650211ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40760]
I0919 11:36:25.071971  108424 node_tree.go:93] Added node "node-1" in group "region1:\x00:zone1" to NodeTree
I0919 11:36:25.072001  108424 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-1"}
I0919 11:36:25.072018  108424 taint_manager.go:438] Updating known taints on node node-1: []
I0919 11:36:25.074334  108424 httplog.go:90] POST /api/v1/nodes: (1.988668ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40760]
I0919 11:36:25.074581  108424 node_tree.go:93] Added node "node-2" in group "region1:\x00:zone1" to NodeTree
I0919 11:36:25.074606  108424 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-2"}
I0919 11:36:25.074624  108424 taint_manager.go:438] Updating known taints on node node-2: []
I0919 11:36:25.077237  108424 httplog.go:90] POST /api/v1/namespaces/taint-based-evictions505e7aa0-7b35-4502-89c4-b0ccb78feb69/pods: (2.246897ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40760]
I0919 11:36:25.077684  108424 taint_manager.go:398] Noticed pod update: types.NamespacedName{Namespace:"taint-based-evictions505e7aa0-7b35-4502-89c4-b0ccb78feb69", Name:"testpod-2"}
I0919 11:36:25.077769  108424 scheduling_queue.go:830] About to try and schedule pod taint-based-evictions505e7aa0-7b35-4502-89c4-b0ccb78feb69/testpod-2
I0919 11:36:25.077781  108424 scheduler.go:530] Attempting to schedule pod: taint-based-evictions505e7aa0-7b35-4502-89c4-b0ccb78feb69/testpod-2
I0919 11:36:25.078237  108424 scheduler_binder.go:257] AssumePodVolumes for pod "taint-based-evictions505e7aa0-7b35-4502-89c4-b0ccb78feb69/testpod-2", node "node-1"
I0919 11:36:25.078264  108424 scheduler_binder.go:267] AssumePodVolumes for pod "taint-based-evictions505e7aa0-7b35-4502-89c4-b0ccb78feb69/testpod-2", node "node-1": all PVCs bound and nothing to do
I0919 11:36:25.078407  108424 factory.go:606] Attempting to bind testpod-2 to node-1
I0919 11:36:25.080696  108424 httplog.go:90] POST /api/v1/namespaces/taint-based-evictions505e7aa0-7b35-4502-89c4-b0ccb78feb69/pods/testpod-2/binding: (2.017106ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40760]
I0919 11:36:25.080956  108424 scheduler.go:662] pod taint-based-evictions505e7aa0-7b35-4502-89c4-b0ccb78feb69/testpod-2 is bound successfully on node "node-1", 3 nodes evaluated, 3 nodes were found feasible. Bound node resource: "Capacity: CPU<4>|Memory<16Gi>|Pods<110>|StorageEphemeral<0>; Allocatable: CPU<4>|Memory<16Gi>|Pods<110>|StorageEphemeral<0>.".
I0919 11:36:25.081059  108424 taint_manager.go:398] Noticed pod update: types.NamespacedName{Namespace:"taint-based-evictions505e7aa0-7b35-4502-89c4-b0ccb78feb69", Name:"testpod-2"}
I0919 11:36:25.083510  108424 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/taint-based-evictions505e7aa0-7b35-4502-89c4-b0ccb78feb69/events: (2.240453ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40760]
I0919 11:36:25.180343  108424 httplog.go:90] GET /api/v1/namespaces/taint-based-evictions505e7aa0-7b35-4502-89c4-b0ccb78feb69/pods/testpod-2: (2.206543ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40760]
I0919 11:36:25.182075  108424 httplog.go:90] GET /api/v1/namespaces/taint-based-evictions505e7aa0-7b35-4502-89c4-b0ccb78feb69/pods/testpod-2: (1.183983ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40760]
I0919 11:36:25.183672  108424 httplog.go:90] GET /api/v1/nodes/node-1: (1.13154ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40760]
I0919 11:36:25.186812  108424 httplog.go:90] PUT /api/v1/nodes/node-1/status: (2.664134ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40760]
I0919 11:36:25.187951  108424 httplog.go:90] GET /api/v1/nodes/node-1?resourceVersion=0: (597.824µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40760]
I0919 11:36:25.190678  108424 httplog.go:90] PATCH /api/v1/nodes/node-1: (2.028093ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40760]
I0919 11:36:25.190928  108424 controller_utils.go:204] Added [&Taint{Key:node.kubernetes.io/not-ready,Value:,Effect:NoSchedule,TimeAdded:2019-09-19 11:36:25.187127282 +0000 UTC m=+334.214000064,}] Taint to Node node-1
I0919 11:36:25.190970  108424 controller_utils.go:216] Made sure that Node node-1 has no [] Taint
I0919 11:36:25.289727  108424 httplog.go:90] GET /api/v1/nodes/node-1: (1.973613ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40760]
I0919 11:36:25.389706  108424 httplog.go:90] GET /api/v1/nodes/node-1: (2.03107ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40760]
I0919 11:36:25.490023  108424 httplog.go:90] GET /api/v1/nodes/node-1: (1.824433ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40760]
I0919 11:36:25.513950  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:25.513959  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:25.514070  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:25.514088  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:25.514542  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:25.514672  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:25.589608  108424 httplog.go:90] GET /api/v1/nodes/node-1: (1.823925ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40760]
I0919 11:36:25.660241  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:25.660356  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:25.660644  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:25.660646  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:25.660791  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:25.661136  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:25.689594  108424 httplog.go:90] GET /api/v1/nodes/node-1: (1.874938ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40760]
I0919 11:36:25.694238  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:25.694252  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:25.694276  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:25.694286  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:25.694287  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:25.694519  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:25.695235  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:25.715105  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:25.762076  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:25.762287  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:25.762401  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:25.762454  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:25.763165  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:25.763772  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:25.790104  108424 httplog.go:90] GET /api/v1/nodes/node-1: (2.300047ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40760]
I0919 11:36:25.868614  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:25.890055  108424 httplog.go:90] GET /api/v1/nodes/node-1: (2.15065ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40760]
I0919 11:36:25.966409  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:25.989837  108424 httplog.go:90] GET /api/v1/nodes/node-1: (2.009242ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40760]
I0919 11:36:26.089769  108424 httplog.go:90] GET /api/v1/nodes/node-1: (1.945039ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40760]
I0919 11:36:26.189642  108424 httplog.go:90] GET /api/v1/nodes/node-1: (1.869448ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40760]
I0919 11:36:26.289669  108424 httplog.go:90] GET /api/v1/nodes/node-1: (1.926736ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40760]
I0919 11:36:26.389673  108424 httplog.go:90] GET /api/v1/nodes/node-1: (1.958971ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40760]
I0919 11:36:26.489458  108424 httplog.go:90] GET /api/v1/nodes/node-1: (1.784543ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40760]
I0919 11:36:26.514219  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:26.514240  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:26.514257  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:26.514273  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:26.514851  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:26.514861  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:26.589717  108424 httplog.go:90] GET /api/v1/nodes/node-1: (1.916994ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40760]
I0919 11:36:26.660441  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:26.660542  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:26.660854  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:26.660863  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:26.660987  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:26.661279  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:26.689656  108424 httplog.go:90] GET /api/v1/nodes/node-1: (2.029027ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40760]
I0919 11:36:26.694465  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:26.694498  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:26.694477  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:26.694477  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:26.694550  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:26.694622  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:26.695385  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:26.715448  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:26.762499  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:26.762519  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:26.762609  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:26.762631  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:26.763426  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:26.763916  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:26.789093  108424 httplog.go:90] GET /api/v1/nodes/node-1: (1.460792ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40760]
I0919 11:36:26.868857  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:26.889620  108424 httplog.go:90] GET /api/v1/nodes/node-1: (1.958633ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40760]
I0919 11:36:26.966627  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:26.989784  108424 httplog.go:90] GET /api/v1/nodes/node-1: (2.023616ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40760]
I0919 11:36:27.090043  108424 httplog.go:90] GET /api/v1/nodes/node-1: (2.241082ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40760]
I0919 11:36:27.189830  108424 httplog.go:90] GET /api/v1/nodes/node-1: (2.0505ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40760]
I0919 11:36:27.290484  108424 httplog.go:90] GET /api/v1/nodes/node-1: (2.660436ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40760]
I0919 11:36:27.389744  108424 httplog.go:90] GET /api/v1/nodes/node-1: (1.945664ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40760]
I0919 11:36:27.489562  108424 httplog.go:90] GET /api/v1/nodes/node-1: (1.880296ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40760]
I0919 11:36:27.514437  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:27.514512  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:27.514482  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:27.514501  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:27.515040  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:27.515040  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:27.581006  108424 httplog.go:90] GET /api/v1/namespaces/default: (1.997168ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58358]
I0919 11:36:27.582975  108424 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.386272ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58358]
I0919 11:36:27.585206  108424 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.263328ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58358]
I0919 11:36:27.588591  108424 httplog.go:90] GET /api/v1/nodes/node-1: (1.131636ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40760]
I0919 11:36:27.660612  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:27.660673  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:27.661001  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:27.661100  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:27.661270  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:27.661547  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:27.689609  108424 httplog.go:90] GET /api/v1/nodes/node-1: (1.918353ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40760]
I0919 11:36:27.694648  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:27.694668  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:27.694684  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:27.694684  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:27.694689  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:27.694909  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:27.695643  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:27.715673  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:27.762894  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:27.762957  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:27.763096  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:27.763143  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:27.763582  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:27.764073  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:27.789562  108424 httplog.go:90] GET /api/v1/nodes/node-1: (1.873014ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40760]
I0919 11:36:27.869003  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:27.890646  108424 httplog.go:90] GET /api/v1/nodes/node-1: (2.816407ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40760]
I0919 11:36:27.966867  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:27.990029  108424 httplog.go:90] GET /api/v1/nodes/node-1: (2.115642ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40760]
I0919 11:36:28.090272  108424 httplog.go:90] GET /api/v1/nodes/node-1: (2.531998ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40760]
I0919 11:36:28.190409  108424 httplog.go:90] GET /api/v1/nodes/node-1: (2.614523ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40760]
I0919 11:36:28.289754  108424 httplog.go:90] GET /api/v1/nodes/node-1: (2.086252ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40760]
I0919 11:36:28.389606  108424 httplog.go:90] GET /api/v1/nodes/node-1: (1.98479ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40760]
I0919 11:36:28.489936  108424 httplog.go:90] GET /api/v1/nodes/node-1: (2.002068ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40760]
I0919 11:36:28.514679  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:28.514697  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:28.514709  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:28.514725  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:28.515229  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:28.515276  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:28.589866  108424 httplog.go:90] GET /api/v1/nodes/node-1: (2.160195ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40760]
I0919 11:36:28.660823  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:28.660878  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:28.661096  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:28.661240  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:28.661470  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:28.661682  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:28.689650  108424 httplog.go:90] GET /api/v1/nodes/node-1: (2.00976ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40760]
I0919 11:36:28.694995  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:28.695006  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:28.695068  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:28.695085  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:28.695232  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:28.695234  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:28.695866  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:28.715910  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:28.763079  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:28.763097  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:28.763332  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:28.763338  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:28.763757  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:28.764237  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:28.789448  108424 httplog.go:90] GET /api/v1/nodes/node-1: (1.844923ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40760]
I0919 11:36:28.869189  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:28.889476  108424 httplog.go:90] GET /api/v1/nodes/node-1: (1.773202ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40760]
I0919 11:36:28.967111  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:28.989562  108424 httplog.go:90] GET /api/v1/nodes/node-1: (1.881776ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40760]
I0919 11:36:29.089685  108424 httplog.go:90] GET /api/v1/nodes/node-1: (2.093526ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40760]
I0919 11:36:29.189914  108424 httplog.go:90] GET /api/v1/nodes/node-1: (2.125894ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40760]
I0919 11:36:29.289878  108424 httplog.go:90] GET /api/v1/nodes/node-1: (2.011867ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40760]
I0919 11:36:29.389800  108424 httplog.go:90] GET /api/v1/nodes/node-1: (2.002306ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40760]
I0919 11:36:29.462515  108424 httplog.go:90] GET /api/v1/namespaces/default: (1.988903ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42320]
I0919 11:36:29.464968  108424 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.912559ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42320]
I0919 11:36:29.467125  108424 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.534248ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42320]
I0919 11:36:29.490029  108424 httplog.go:90] GET /api/v1/nodes/node-1: (2.276551ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40760]
I0919 11:36:29.515111  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:29.515111  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:29.515133  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:29.515154  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:29.515521  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:29.515521  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:29.589772  108424 httplog.go:90] GET /api/v1/nodes/node-1: (2.051092ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40760]
I0919 11:36:29.661139  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:29.661136  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:29.661268  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:29.661637  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:29.661648  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:29.661881  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:29.689686  108424 httplog.go:90] GET /api/v1/nodes/node-1: (1.952537ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40760]
I0919 11:36:29.695204  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:29.695209  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:29.695214  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:29.695234  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:29.695441  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:29.695455  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:29.696021  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:29.716166  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:29.763282  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:29.763296  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:29.763574  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:29.763693  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:29.763944  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:29.764434  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:29.789309  108424 httplog.go:90] GET /api/v1/nodes/node-1: (1.680482ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40760]
I0919 11:36:29.869400  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:29.889736  108424 httplog.go:90] GET /api/v1/nodes/node-1: (2.051629ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40760]
I0919 11:36:29.967346  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:29.989776  108424 httplog.go:90] GET /api/v1/nodes/node-1: (1.984148ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40760]
I0919 11:36:30.065054  108424 node_lifecycle_controller.go:706] Controller observed a new Node: "node-0"
I0919 11:36:30.065135  108424 controller_utils.go:168] Recording Registered Node node-0 in Controller event message for node node-0
I0919 11:36:30.065305  108424 node_lifecycle_controller.go:1244] Initializing eviction metric for zone: region1:\x00:zone1
I0919 11:36:30.065336  108424 node_lifecycle_controller.go:706] Controller observed a new Node: "node-1"
I0919 11:36:30.065345  108424 controller_utils.go:168] Recording Registered Node node-1 in Controller event message for node node-1
I0919 11:36:30.065375  108424 node_lifecycle_controller.go:706] Controller observed a new Node: "node-2"
I0919 11:36:30.065416  108424 controller_utils.go:168] Recording Registered Node node-2 in Controller event message for node node-2
W0919 11:36:30.065475  108424 node_lifecycle_controller.go:940] Missing timestamp for Node node-0. Assuming now as a timestamp.
I0919 11:36:30.065479  108424 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-0", UID:"95266ff1-53b3-4b54-8f0d-685e2335075a", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node node-0 event: Registered Node node-0 in Controller
W0919 11:36:30.065529  108424 node_lifecycle_controller.go:940] Missing timestamp for Node node-1. Assuming now as a timestamp.
I0919 11:36:30.065557  108424 node_lifecycle_controller.go:770] Node node-1 is NotReady as of 2019-09-19 11:36:30.065538352 +0000 UTC m=+339.092411129. Adding it to the Taint queue.
W0919 11:36:30.065632  108424 node_lifecycle_controller.go:940] Missing timestamp for Node node-2. Assuming now as a timestamp.
I0919 11:36:30.065636  108424 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-1", UID:"a0b483b2-7f64-433b-a1f1-35cc85978e54", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node node-1 event: Registered Node node-1 in Controller
I0919 11:36:30.065716  108424 node_lifecycle_controller.go:1144] Controller detected that zone region1::zone1 is now in state Normal.
I0919 11:36:30.065732  108424 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-2", UID:"c63715f5-dbe1-444e-bc4e-ba1f6bf2314c", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node node-2 event: Registered Node node-2 in Controller
I0919 11:36:30.068416  108424 httplog.go:90] POST /api/v1/namespaces/default/events: (2.263404ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40760]
I0919 11:36:30.070776  108424 httplog.go:90] POST /api/v1/namespaces/default/events: (1.722894ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40760]
I0919 11:36:30.072900  108424 httplog.go:90] POST /api/v1/namespaces/default/events: (1.667067ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40760]
I0919 11:36:30.077689  108424 httplog.go:90] GET /api/v1/nodes/node-1?resourceVersion=0: (737.516µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40760]
I0919 11:36:30.081934  108424 httplog.go:90] PATCH /api/v1/nodes/node-1: (3.350561ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40760]
I0919 11:36:30.082245  108424 controller_utils.go:204] Added [&Taint{Key:node.kubernetes.io/not-ready,Value:,Effect:NoExecute,TimeAdded:2019-09-19 11:36:30.076712222 +0000 UTC m=+339.103584980,}] Taint to Node node-1
I0919 11:36:30.082297  108424 controller_utils.go:216] Made sure that Node node-1 has no [&Taint{Key:node.kubernetes.io/unreachable,Value:,Effect:NoExecute,TimeAdded:<nil>,}] Taint
I0919 11:36:30.082488  108424 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-1"}
I0919 11:36:30.082516  108424 taint_manager.go:438] Updating known taints on node node-1: [{node.kubernetes.io/not-ready  NoExecute 2019-09-19 11:36:30 +0000 UTC}]
I0919 11:36:30.082620  108424 timed_workers.go:110] Adding TimedWorkerQueue item taint-based-evictions505e7aa0-7b35-4502-89c4-b0ccb78feb69/testpod-2 at 2019-09-19 11:36:30.082604708 +0000 UTC m=+339.109477491 to be fired at 2019-09-19 11:36:30.082604708 +0000 UTC m=+339.109477491
I0919 11:36:30.082674  108424 taint_manager.go:105] NoExecuteTaintManager is deleting Pod: taint-based-evictions505e7aa0-7b35-4502-89c4-b0ccb78feb69/testpod-2
I0919 11:36:30.082811  108424 event.go:255] Event(v1.ObjectReference{Kind:"Pod", Namespace:"taint-based-evictions505e7aa0-7b35-4502-89c4-b0ccb78feb69", Name:"testpod-2", UID:"", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TaintManagerEviction' Marking for deletion Pod taint-based-evictions505e7aa0-7b35-4502-89c4-b0ccb78feb69/testpod-2
I0919 11:36:30.085240  108424 httplog.go:90] POST /api/v1/namespaces/taint-based-evictions505e7aa0-7b35-4502-89c4-b0ccb78feb69/events: (1.790541ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40762]
I0919 11:36:30.086537  108424 httplog.go:90] DELETE /api/v1/namespaces/taint-based-evictions505e7aa0-7b35-4502-89c4-b0ccb78feb69/pods/testpod-2: (3.563214ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40760]
I0919 11:36:30.088788  108424 httplog.go:90] GET /api/v1/nodes/node-1: (1.302515ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40760]
I0919 11:36:30.189541  108424 httplog.go:90] GET /api/v1/nodes/node-1: (1.893703ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40760]
I0919 11:36:30.289670  108424 httplog.go:90] GET /api/v1/nodes/node-1: (1.901787ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40760]
I0919 11:36:30.389737  108424 httplog.go:90] GET /api/v1/nodes/node-1: (2.014584ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40760]
I0919 11:36:30.489965  108424 httplog.go:90] GET /api/v1/nodes/node-1: (2.28855ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40760]
I0919 11:36:30.515422  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:30.515461  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:30.515437  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:30.515488  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:30.515758  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:30.515763  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:30.589921  108424 httplog.go:90] GET /api/v1/nodes/node-1: (2.222596ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40760]
I0919 11:36:30.661307  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:30.661465  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:30.661315  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:30.661960  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:30.662164  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:30.662143  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:30.689411  108424 httplog.go:90] GET /api/v1/nodes/node-1: (1.646722ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40760]
I0919 11:36:30.695706  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:30.695750  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:30.695976  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:30.696140  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:30.695995  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:30.696012  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:30.696018  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:30.716434  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:30.763476  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:30.763527  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:30.763850  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:30.763867  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:30.764093  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:30.764564  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:30.789461  108424 httplog.go:90] GET /api/v1/nodes/node-1: (1.809443ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40760]
I0919 11:36:30.869763  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:30.889748  108424 httplog.go:90] GET /api/v1/nodes/node-1: (1.951469ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40760]
I0919 11:36:30.967577  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:30.989706  108424 httplog.go:90] GET /api/v1/nodes/node-1: (2.04474ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40760]
I0919 11:36:31.089764  108424 httplog.go:90] GET /api/v1/nodes/node-1: (2.041591ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40760]
I0919 11:36:31.189708  108424 httplog.go:90] GET /api/v1/nodes/node-1: (1.98568ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40760]
I0919 11:36:31.289713  108424 httplog.go:90] GET /api/v1/nodes/node-1: (1.984636ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40760]
I0919 11:36:31.389424  108424 httplog.go:90] GET /api/v1/nodes/node-1: (1.830036ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40760]
I0919 11:36:31.490154  108424 httplog.go:90] GET /api/v1/nodes/node-1: (2.385404ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40760]
I0919 11:36:31.515628  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:31.515628  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:31.515628  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:31.515684  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:31.516013  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:31.516263  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:31.589753  108424 httplog.go:90] GET /api/v1/nodes/node-1: (2.052121ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40760]
I0919 11:36:31.661691  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:31.661732  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:31.661707  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:31.662332  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:31.662411  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:31.662530  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:31.689575  108424 httplog.go:90] GET /api/v1/nodes/node-1: (1.961921ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40760]
I0919 11:36:31.695916  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:31.695923  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:31.696277  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:31.696445  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:31.696454  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:31.696463  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:31.696568  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:31.716640  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:31.763880  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:31.763880  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:31.764148  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:31.764152  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:31.764354  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:31.764731  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:31.789883  108424 httplog.go:90] GET /api/v1/nodes/node-1: (2.021368ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40760]
I0919 11:36:31.869972  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:31.889329  108424 httplog.go:90] GET /api/v1/nodes/node-1: (1.710224ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40760]
I0919 11:36:31.967806  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:31.989405  108424 httplog.go:90] GET /api/v1/nodes/node-1: (1.745861ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40760]
I0919 11:36:32.089487  108424 httplog.go:90] GET /api/v1/nodes/node-1: (1.753566ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40760]
I0919 11:36:32.189498  108424 httplog.go:90] GET /api/v1/nodes/node-1: (1.757101ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40760]
I0919 11:36:32.289612  108424 httplog.go:90] GET /api/v1/nodes/node-1: (1.905184ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40760]
I0919 11:36:32.389494  108424 httplog.go:90] GET /api/v1/nodes/node-1: (1.813396ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40760]
I0919 11:36:32.489573  108424 httplog.go:90] GET /api/v1/nodes/node-1: (1.93492ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40760]
I0919 11:36:32.516041  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:32.516091  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:32.516076  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:32.516106  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:32.516309  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:32.516435  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:32.589917  108424 httplog.go:90] GET /api/v1/nodes/node-1: (2.133605ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40760]
I0919 11:36:32.661878  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:32.661879  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:32.661899  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:32.662449  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:32.662593  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:32.662666  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:32.689985  108424 httplog.go:90] GET /api/v1/nodes/node-1: (2.32241ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40760]
I0919 11:36:32.696159  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:32.696209  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:32.696477  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:32.696597  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:32.696616  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:32.696645  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:32.696675  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:32.716878  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:32.764120  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:32.764313  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:32.764320  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:32.764121  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:32.764601  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:32.764887  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:32.791469  108424 httplog.go:90] GET /api/v1/nodes/node-1: (2.908499ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40760]
I0919 11:36:32.870225  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:32.889620  108424 httplog.go:90] GET /api/v1/nodes/node-1: (1.820638ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40760]
I0919 11:36:32.968026  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:32.989563  108424 httplog.go:90] GET /api/v1/nodes/node-1: (1.863427ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40760]
I0919 11:36:33.090174  108424 httplog.go:90] GET /api/v1/nodes/node-1: (1.630457ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40760]
I0919 11:36:33.190196  108424 httplog.go:90] GET /api/v1/nodes/node-1: (2.007772ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40760]
I0919 11:36:33.289871  108424 httplog.go:90] GET /api/v1/nodes/node-1: (2.143506ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40760]
I0919 11:36:33.389789  108424 httplog.go:90] GET /api/v1/nodes/node-1: (1.742106ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40760]
I0919 11:36:33.489863  108424 httplog.go:90] GET /api/v1/nodes/node-1: (2.145127ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40760]
I0919 11:36:33.516194  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:33.516231  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:33.516259  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:33.516506  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:33.516321  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:33.516703  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:33.589804  108424 httplog.go:90] GET /api/v1/nodes/node-1: (2.085038ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40760]
I0919 11:36:33.662412  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:33.662425  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:33.662716  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:33.662865  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:33.662865  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:33.662880  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:33.689576  108424 httplog.go:90] GET /api/v1/nodes/node-1: (1.909291ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40760]
I0919 11:36:33.696340  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:33.696340  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:33.696795  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:33.696803  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:33.696832  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:33.696847  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:33.696943  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:33.717293  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:33.764463  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:33.764510  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:33.764559  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:33.764610  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:33.764765  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:33.765040  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:33.789601  108424 httplog.go:90] GET /api/v1/nodes/node-1: (1.863861ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40760]
I0919 11:36:33.870403  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:33.889688  108424 httplog.go:90] GET /api/v1/nodes/node-1: (2.034425ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40760]
I0919 11:36:33.968447  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:33.989325  108424 httplog.go:90] GET /api/v1/nodes/node-1: (1.605911ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40760]
I0919 11:36:34.089494  108424 httplog.go:90] GET /api/v1/nodes/node-1: (1.807538ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40760]
I0919 11:36:34.189538  108424 httplog.go:90] GET /api/v1/nodes/node-1: (1.846127ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40760]
I0919 11:36:34.290086  108424 httplog.go:90] GET /api/v1/nodes/node-1: (1.987871ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40760]
I0919 11:36:34.311539  108424 httplog.go:90] GET /api/v1/namespaces/default: (2.134254ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51154]
I0919 11:36:34.313600  108424 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.581876ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51154]
I0919 11:36:34.315095  108424 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.160322ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51154]
I0919 11:36:34.389609  108424 httplog.go:90] GET /api/v1/nodes/node-1: (1.928846ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40760]
I0919 11:36:34.489259  108424 httplog.go:90] GET /api/v1/nodes/node-1: (1.675626ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40760]
I0919 11:36:34.516585  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:34.516630  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:34.516599  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:34.516680  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:34.516797  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:34.516805  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:34.568578  108424 httplog.go:90] GET /api/v1/namespaces/default: (1.532607ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40760]
I0919 11:36:34.570397  108424 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.329527ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40760]
I0919 11:36:34.572004  108424 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.143056ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40760]
I0919 11:36:34.589473  108424 httplog.go:90] GET /api/v1/nodes/node-1: (1.882377ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40760]
I0919 11:36:34.662625  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:34.662626  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:34.662804  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:34.663071  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:34.663086  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:34.663074  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:34.689176  108424 httplog.go:90] GET /api/v1/nodes/node-1: (1.531642ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40760]
I0919 11:36:34.696570  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:34.696570  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:34.696953  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:34.696991  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:34.696987  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:34.697015  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:34.697087  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:34.717512  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:34.764634  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:34.764676  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:34.764698  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:34.764698  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:34.764928  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:34.765195  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:34.789544  108424 httplog.go:90] GET /api/v1/nodes/node-1: (1.833608ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40760]
I0919 11:36:34.870596  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:34.889801  108424 httplog.go:90] GET /api/v1/nodes/node-1: (2.020338ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40760]
I0919 11:36:34.968691  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:34.989489  108424 httplog.go:90] GET /api/v1/nodes/node-1: (1.808832ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40760]
I0919 11:36:35.066063  108424 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 5.000534232s. Last Ready is: &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:0001-01-01 00:00:00 +0000 UTC,Reason:,Message:,}
I0919 11:36:35.066153  108424 node_lifecycle_controller.go:1012] Condition MemoryPressure of node node-0 was never updated by kubelet
I0919 11:36:35.066170  108424 node_lifecycle_controller.go:1012] Condition DiskPressure of node node-0 was never updated by kubelet
I0919 11:36:35.066216  108424 node_lifecycle_controller.go:1012] Condition PIDPressure of node node-0 was never updated by kubelet
I0919 11:36:35.069617  108424 httplog.go:90] PUT /api/v1/nodes/node-0/status: (2.778992ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40760]
I0919 11:36:35.070022  108424 controller_utils.go:180] Recording status change NodeNotReady event message for node node-0
I0919 11:36:35.070056  108424 controller_utils.go:124] Update ready status of pods on node [node-0]
I0919 11:36:35.070211  108424 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-0", UID:"95266ff1-53b3-4b54-8f0d-685e2335075a", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeNotReady' Node node-0 status is now: NodeNotReady
I0919 11:36:35.071047  108424 httplog.go:90] GET /api/v1/nodes/node-0?resourceVersion=0: (666.629µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40762]
I0919 11:36:35.071859  108424 httplog.go:90] GET /api/v1/pods?fieldSelector=spec.nodeName%3Dnode-0: (1.55572ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40760]
I0919 11:36:35.072528  108424 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 5.006969724s. Last Ready is: &NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:0001-01-01 00:00:00 +0000 UTC,Reason:,Message:,}
I0919 11:36:35.072602  108424 node_lifecycle_controller.go:1012] Condition MemoryPressure of node node-1 was never updated by kubelet
I0919 11:36:35.072615  108424 node_lifecycle_controller.go:1012] Condition DiskPressure of node node-1 was never updated by kubelet
I0919 11:36:35.072625  108424 node_lifecycle_controller.go:1012] Condition PIDPressure of node node-1 was never updated by kubelet
I0919 11:36:35.072654  108424 httplog.go:90] POST /api/v1/namespaces/default/events: (1.820958ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40768]
I0919 11:36:35.074520  108424 httplog.go:90] PATCH /api/v1/nodes/node-0: (2.479026ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40762]
I0919 11:36:35.074903  108424 controller_utils.go:204] Added [&Taint{Key:node.kubernetes.io/unreachable,Value:,Effect:NoSchedule,TimeAdded:2019-09-19 11:36:35.070141896 +0000 UTC m=+344.097014659,}] Taint to Node node-0
I0919 11:36:35.074945  108424 controller_utils.go:216] Made sure that Node node-0 has no [] Taint
I0919 11:36:35.075211  108424 httplog.go:90] PUT /api/v1/nodes/node-1/status: (2.0276ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40768]
I0919 11:36:35.075579  108424 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 5.009928398s. Last Ready is: &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:0001-01-01 00:00:00 +0000 UTC,Reason:,Message:,}
I0919 11:36:35.075719  108424 node_lifecycle_controller.go:1012] Condition MemoryPressure of node node-2 was never updated by kubelet
I0919 11:36:35.075764  108424 node_lifecycle_controller.go:1012] Condition DiskPressure of node node-2 was never updated by kubelet
I0919 11:36:35.075809  108424 node_lifecycle_controller.go:1012] Condition PIDPressure of node node-2 was never updated by kubelet
I0919 11:36:35.076509  108424 httplog.go:90] GET /api/v1/nodes/node-1?resourceVersion=0: (419.814µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40762]
I0919 11:36:35.077911  108424 httplog.go:90] PUT /api/v1/nodes/node-2/status: (1.782534ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40760]
I0919 11:36:35.078347  108424 controller_utils.go:180] Recording status change NodeNotReady event message for node node-2
I0919 11:36:35.078496  108424 controller_utils.go:124] Update ready status of pods on node [node-2]
I0919 11:36:35.078730  108424 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-2", UID:"c63715f5-dbe1-444e-bc4e-ba1f6bf2314c", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeNotReady' Node node-2 status is now: NodeNotReady
I0919 11:36:35.079215  108424 httplog.go:90] GET /api/v1/nodes/node-2?resourceVersion=0: (526.21µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40760]
I0919 11:36:35.080941  108424 httplog.go:90] POST /api/v1/namespaces/default/events: (1.662808ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40772]
I0919 11:36:35.081006  108424 httplog.go:90] PATCH /api/v1/nodes/node-1: (2.726767ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40762]
I0919 11:36:35.081502  108424 controller_utils.go:204] Added [&Taint{Key:node.kubernetes.io/unreachable,Value:,Effect:NoSchedule,TimeAdded:2019-09-19 11:36:35.075859098 +0000 UTC m=+344.102731856,}] Taint to Node node-1
I0919 11:36:35.081835  108424 httplog.go:90] GET /api/v1/pods?fieldSelector=spec.nodeName%3Dnode-2: (2.634651ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40770]
I0919 11:36:35.082086  108424 node_lifecycle_controller.go:1094] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
I0919 11:36:35.082088  108424 httplog.go:90] GET /api/v1/nodes/node-1?resourceVersion=0: (391.754µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40772]
I0919 11:36:35.082514  108424 httplog.go:90] GET /api/v1/nodes/node-1?resourceVersion=0: (290.588µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40770]
I0919 11:36:35.084122  108424 httplog.go:90] PATCH /api/v1/nodes/node-2: (3.798852ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40760]
I0919 11:36:35.084409  108424 controller_utils.go:204] Added [&Taint{Key:node.kubernetes.io/unreachable,Value:,Effect:NoSchedule,TimeAdded:2019-09-19 11:36:35.078507891 +0000 UTC m=+344.105380668,}] Taint to Node node-2
I0919 11:36:35.084442  108424 controller_utils.go:216] Made sure that Node node-2 has no [] Taint
I0919 11:36:35.085980  108424 httplog.go:90] PATCH /api/v1/nodes/node-1: (2.184004ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40762]
I0919 11:36:35.086308  108424 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-1"}
I0919 11:36:35.086410  108424 taint_manager.go:438] Updating known taints on node node-1: []
I0919 11:36:35.086469  108424 taint_manager.go:459] All taints were removed from the Node node-1. Cancelling all evictions...
I0919 11:36:35.086482  108424 store.go:362] GuaranteedUpdate of /fe5d4548-d1e4-4a00-9fbf-4fe85b860a84/minions/node-1 failed because of a conflict, going to retry
I0919 11:36:35.086502  108424 timed_workers.go:129] Cancelling TimedWorkerQueue item taint-based-evictions505e7aa0-7b35-4502-89c4-b0ccb78feb69/testpod-2 at 2019-09-19 11:36:35.086498761 +0000 UTC m=+344.113371532
I0919 11:36:35.088326  108424 httplog.go:90] PATCH /api/v1/nodes/node-1: (5.025058ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40772]
I0919 11:36:35.088630  108424 controller_utils.go:216] Made sure that Node node-1 has no [&Taint{Key:node.kubernetes.io/not-ready,Value:,Effect:NoSchedule,TimeAdded:2019-09-19 11:36:25 +0000 UTC,}] Taint
I0919 11:36:35.088649  108424 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-1"}
I0919 11:36:35.088674  108424 taint_manager.go:438] Updating known taints on node node-1: [{node.kubernetes.io/not-ready  NoExecute 2019-09-19 11:36:30 +0000 UTC}]
I0919 11:36:35.088743  108424 timed_workers.go:110] Adding TimedWorkerQueue item taint-based-evictions505e7aa0-7b35-4502-89c4-b0ccb78feb69/testpod-2 at 2019-09-19 11:36:35.088728918 +0000 UTC m=+344.115601700 to be fired at 2019-09-19 11:36:35.088728918 +0000 UTC m=+344.115601700
I0919 11:36:35.088790  108424 taint_manager.go:105] NoExecuteTaintManager is deleting Pod: taint-based-evictions505e7aa0-7b35-4502-89c4-b0ccb78feb69/testpod-2
I0919 11:36:35.089006  108424 event.go:255] Event(v1.ObjectReference{Kind:"Pod", Namespace:"taint-based-evictions505e7aa0-7b35-4502-89c4-b0ccb78feb69", Name:"testpod-2", UID:"", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TaintManagerEviction' Marking for deletion Pod taint-based-evictions505e7aa0-7b35-4502-89c4-b0ccb78feb69/testpod-2
I0919 11:36:35.089596  108424 httplog.go:90] GET /api/v1/nodes/node-1: (2.118763ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40762]
I0919 11:36:35.090268  108424 httplog.go:90] DELETE /api/v1/namespaces/taint-based-evictions505e7aa0-7b35-4502-89c4-b0ccb78feb69/pods/testpod-2: (1.227591ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40772]
I0919 11:36:35.091474  108424 httplog.go:90] PATCH /api/v1/namespaces/taint-based-evictions505e7aa0-7b35-4502-89c4-b0ccb78feb69/events/testpod-2.15c5d3dcb47a96b8: (2.133872ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40760]
I0919 11:36:35.189652  108424 httplog.go:90] GET /api/v1/nodes/node-1: (1.924583ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40772]
I0919 11:36:35.289755  108424 httplog.go:90] GET /api/v1/nodes/node-1: (1.999338ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40772]
I0919 11:36:35.389830  108424 httplog.go:90] GET /api/v1/nodes/node-1: (2.09054ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40772]
I0919 11:36:35.489564  108424 httplog.go:90] GET /api/v1/nodes/node-1: (1.857675ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40772]
I0919 11:36:35.516836  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:35.516859  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:35.516938  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:35.516851  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:35.516859  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:35.516972  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:35.589592  108424 httplog.go:90] GET /api/v1/nodes/node-1: (1.94017ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40772]
I0919 11:36:35.662833  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:35.662832  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:35.662925  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:35.663233  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:35.663239  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:35.663295  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:35.689391  108424 httplog.go:90] GET /api/v1/nodes/node-1: (1.63343ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40772]
I0919 11:36:35.696783  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:35.696782  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:35.697117  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:35.697195  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:35.697152  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:35.697186  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:35.697174  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:35.717724  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:35.765011  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:35.765038  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:35.765064  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:35.765067  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:35.765102  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:35.765406  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:35.790073  108424 httplog.go:90] GET /api/v1/nodes/node-1: (2.371328ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40772]
I0919 11:36:35.870800  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:35.889815  108424 httplog.go:90] GET /api/v1/nodes/node-1: (2.028423ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40772]
I0919 11:36:35.968917  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:35.989650  108424 httplog.go:90] GET /api/v1/nodes/node-1: (1.802088ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40772]
I0919 11:36:36.089321  108424 httplog.go:90] GET /api/v1/nodes/node-1: (1.477969ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40772]
I0919 11:36:36.190251  108424 httplog.go:90] GET /api/v1/nodes/node-1: (2.540282ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40772]
I0919 11:36:36.289475  108424 httplog.go:90] GET /api/v1/nodes/node-1: (1.803934ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40772]
I0919 11:36:36.389516  108424 httplog.go:90] GET /api/v1/nodes/node-1: (1.751892ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40772]
I0919 11:36:36.489514  108424 httplog.go:90] GET /api/v1/nodes/node-1: (1.83933ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40772]
I0919 11:36:36.517066  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:36.517122  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:36.517116  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:36.517127  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:36.517138  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:36.517140  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:36.589531  108424 httplog.go:90] GET /api/v1/nodes/node-1: (1.896026ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40772]
I0919 11:36:36.663032  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:36.663032  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:36.663062  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:36.663539  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:36.663393  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:36.663421  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:36.689518  108424 httplog.go:90] GET /api/v1/nodes/node-1: (1.822128ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40772]
I0919 11:36:36.696968  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:36.696968  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:36.697381  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:36.697412  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:36.697418  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:36.697441  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:36.697563  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:36.717927  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:36.765185  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:36.765195  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:36.765244  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:36.765250  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:36.765264  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:36.765551  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:36.789395  108424 httplog.go:90] GET /api/v1/nodes/node-1: (1.743661ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40772]
I0919 11:36:36.870989  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:36.889577  108424 httplog.go:90] GET /api/v1/nodes/node-1: (1.902364ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40772]
I0919 11:36:36.969102  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:36.989828  108424 httplog.go:90] GET /api/v1/nodes/node-1: (2.124144ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40772]
I0919 11:36:37.089525  108424 httplog.go:90] GET /api/v1/nodes/node-1: (1.801345ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40772]
I0919 11:36:37.189819  108424 httplog.go:90] GET /api/v1/nodes/node-1: (2.02143ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40772]
I0919 11:36:37.289764  108424 httplog.go:90] GET /api/v1/nodes/node-1: (2.037792ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40772]
I0919 11:36:37.389979  108424 httplog.go:90] GET /api/v1/nodes/node-1: (2.286464ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40772]
I0919 11:36:37.489518  108424 httplog.go:90] GET /api/v1/nodes/node-1: (1.815675ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40772]
I0919 11:36:37.517657  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:37.517864  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:37.517868  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:37.518009  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:37.518010  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:37.518223  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:37.581134  108424 httplog.go:90] GET /api/v1/namespaces/default: (2.016163ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58358]
I0919 11:36:37.583162  108424 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.392107ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58358]
I0919 11:36:37.585530  108424 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.624324ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58358]
I0919 11:36:37.589504  108424 httplog.go:90] GET /api/v1/nodes/node-1: (1.870804ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40772]
I0919 11:36:37.663488  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:37.663606  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:37.663703  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:37.663726  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:37.663735  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:37.663778  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:37.689606  108424 httplog.go:90] GET /api/v1/nodes/node-1: (1.925302ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40772]
I0919 11:36:37.697201  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:37.697201  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:37.697575  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:37.697798  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:37.697796  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:37.697971  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:37.698030  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:37.718119  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:37.765346  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:37.765391  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:37.765420  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:37.765425  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:37.765429  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:37.765678  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:37.789653  108424 httplog.go:90] GET /api/v1/nodes/node-1: (1.913212ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40772]
I0919 11:36:37.871177  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:37.889658  108424 httplog.go:90] GET /api/v1/nodes/node-1: (1.925514ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40772]
I0919 11:36:37.969330  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:37.989822  108424 httplog.go:90] GET /api/v1/nodes/node-1: (2.097322ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40772]
I0919 11:36:38.090298  108424 httplog.go:90] GET /api/v1/nodes/node-1: (2.452953ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40772]
I0919 11:36:38.190448  108424 httplog.go:90] GET /api/v1/nodes/node-1: (2.561458ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40772]
I0919 11:36:38.289804  108424 httplog.go:90] GET /api/v1/nodes/node-1: (2.087668ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40772]
I0919 11:36:38.392139  108424 httplog.go:90] GET /api/v1/nodes/node-1: (4.391311ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40772]
I0919 11:36:38.489827  108424 httplog.go:90] GET /api/v1/nodes/node-1: (2.060359ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40772]
I0919 11:36:38.517905  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:38.518054  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:38.518054  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:38.518160  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:38.518158  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:38.518397  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:38.589830  108424 httplog.go:90] GET /api/v1/nodes/node-1: (2.179099ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40772]
I0919 11:36:38.663779  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:38.663881  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:38.663883  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:38.663906  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:38.663883  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:38.663881  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:38.690082  108424 httplog.go:90] GET /api/v1/nodes/node-1: (2.347003ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40772]
I0919 11:36:38.697441  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:38.697442  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:38.697739  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:38.697882  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:38.697977  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:38.698161  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:38.698171  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:38.718385  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:38.765612  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:38.765653  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:38.765647  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:38.765775  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:38.765777  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:38.765819  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:38.789573  108424 httplog.go:90] GET /api/v1/nodes/node-1: (1.954354ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40772]
I0919 11:36:38.871423  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:38.889605  108424 httplog.go:90] GET /api/v1/nodes/node-1: (1.902893ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40772]
I0919 11:36:38.969584  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:38.989681  108424 httplog.go:90] GET /api/v1/nodes/node-1: (1.887845ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40772]
I0919 11:36:39.089657  108424 httplog.go:90] GET /api/v1/nodes/node-1: (1.914459ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40772]
I0919 11:36:39.189686  108424 httplog.go:90] GET /api/v1/nodes/node-1: (1.909388ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40772]
I0919 11:36:39.289704  108424 httplog.go:90] GET /api/v1/nodes/node-1: (2.018353ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40772]
I0919 11:36:39.390008  108424 httplog.go:90] GET /api/v1/nodes/node-1: (2.237136ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40772]
I0919 11:36:39.462459  108424 httplog.go:90] GET /api/v1/namespaces/default: (1.699842ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42320]
I0919 11:36:39.464602  108424 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.458255ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42320]
I0919 11:36:39.466092  108424 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.064858ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42320]
I0919 11:36:39.489797  108424 httplog.go:90] GET /api/v1/nodes/node-1: (2.14344ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40772]
I0919 11:36:39.518226  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:39.518293  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:39.518301  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:39.518497  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:39.518535  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:39.518587  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:39.589612  108424 httplog.go:90] GET /api/v1/nodes/node-1: (1.895854ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40772]
I0919 11:36:39.664016  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:39.664087  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:39.664102  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:39.664123  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:39.664140  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:39.664146  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:39.689618  108424 httplog.go:90] GET /api/v1/nodes/node-1: (1.904045ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40772]
I0919 11:36:39.697713  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:39.697713  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:39.697897  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:39.698048  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:39.698125  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:39.698305  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:39.698323  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:39.718604  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:39.765872  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:39.765916  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:39.765938  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:39.765956  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:39.765977  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:39.765976  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:39.789813  108424 httplog.go:90] GET /api/v1/nodes/node-1: (2.065111ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40772]
I0919 11:36:39.871630  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:39.889886  108424 httplog.go:90] GET /api/v1/nodes/node-1: (2.183647ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40772]
I0919 11:36:39.969823  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:39.989654  108424 httplog.go:90] GET /api/v1/nodes/node-1: (1.870214ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40772]
I0919 11:36:40.086471  108424 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 10.02097028s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-19 11:36:35 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0919 11:36:40.086563  108424 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 10.021072869s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 11:36:25 +0000 UTC,LastTransitionTime:2019-09-19 11:36:35 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 11:36:40.086579  108424 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 10.021089566s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 11:36:25 +0000 UTC,LastTransitionTime:2019-09-19 11:36:35 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 11:36:40.086630  108424 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 10.021140323s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 11:36:25 +0000 UTC,LastTransitionTime:2019-09-19 11:36:35 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 11:36:40.086690  108424 node_lifecycle_controller.go:796] Node node-0 is unresponsive as of 2019-09-19 11:36:40.086675205 +0000 UTC m=+349.113547988. Adding it to the Taint queue.
I0919 11:36:40.086722  108424 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 10.021186379s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-19 11:36:35 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0919 11:36:40.086886  108424 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 10.021347127s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 11:36:25 +0000 UTC,LastTransitionTime:2019-09-19 11:36:35 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 11:36:40.086969  108424 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 10.021432592s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 11:36:25 +0000 UTC,LastTransitionTime:2019-09-19 11:36:35 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 11:36:40.087047  108424 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 10.021510775s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 11:36:25 +0000 UTC,LastTransitionTime:2019-09-19 11:36:35 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 11:36:40.088309  108424 httplog.go:90] GET /api/v1/nodes/node-1?resourceVersion=0: (785.187µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40772]
I0919 11:36:40.089216  108424 httplog.go:90] GET /api/v1/nodes/node-1: (1.69149ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40762]
I0919 11:36:40.092072  108424 httplog.go:90] PATCH /api/v1/nodes/node-1: (2.467768ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40762]
I0919 11:36:40.092475  108424 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-1"}
I0919 11:36:40.092632  108424 taint_manager.go:438] Updating known taints on node node-1: [{node.kubernetes.io/not-ready  NoExecute 2019-09-19 11:36:30 +0000 UTC} {node.kubernetes.io/unreachable  NoExecute 2019-09-19 11:36:40 +0000 UTC}]
I0919 11:36:40.092756  108424 timed_workers.go:110] Adding TimedWorkerQueue item taint-based-evictions505e7aa0-7b35-4502-89c4-b0ccb78feb69/testpod-2 at 2019-09-19 11:36:40.092730908 +0000 UTC m=+349.119603689 to be fired at 2019-09-19 11:36:40.092730908 +0000 UTC m=+349.119603689
W0919 11:36:40.092830  108424 timed_workers.go:115] Trying to add already existing work for &{NamespacedName:taint-based-evictions505e7aa0-7b35-4502-89c4-b0ccb78feb69/testpod-2}. Skipping.
I0919 11:36:40.092837  108424 controller_utils.go:204] Added [&Taint{Key:node.kubernetes.io/unreachable,Value:,Effect:NoExecute,TimeAdded:2019-09-19 11:36:40.087180446 +0000 UTC m=+349.114053221,}] Taint to Node node-1
I0919 11:36:40.093839  108424 httplog.go:90] GET /api/v1/nodes/node-1?resourceVersion=0: (595.14µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40762]
I0919 11:36:40.097936  108424 httplog.go:90] PATCH /api/v1/nodes/node-1: (2.67611ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40762]
I0919 11:36:40.098324  108424 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-1"}
I0919 11:36:40.098453  108424 taint_manager.go:438] Updating known taints on node node-1: [{node.kubernetes.io/unreachable  NoExecute 2019-09-19 11:36:40 +0000 UTC}]
I0919 11:36:40.098547  108424 timed_workers.go:110] Adding TimedWorkerQueue item taint-based-evictions505e7aa0-7b35-4502-89c4-b0ccb78feb69/testpod-2 at 2019-09-19 11:36:40.098534008 +0000 UTC m=+349.125406790 to be fired at 2019-09-19 11:41:40.098534008 +0000 UTC m=+649.125406790
W0919 11:36:40.098597  108424 timed_workers.go:115] Trying to add already existing work for &{NamespacedName:taint-based-evictions505e7aa0-7b35-4502-89c4-b0ccb78feb69/testpod-2}. Skipping.
I0919 11:36:40.098513  108424 controller_utils.go:216] Made sure that Node node-1 has no [&Taint{Key:node.kubernetes.io/not-ready,Value:,Effect:NoExecute,TimeAdded:<nil>,}] Taint
I0919 11:36:40.098721  108424 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 10.0330722s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-19 11:36:35 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0919 11:36:40.098797  108424 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 10.033149353s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 11:36:25 +0000 UTC,LastTransitionTime:2019-09-19 11:36:35 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 11:36:40.098840  108424 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 10.03319266s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 11:36:25 +0000 UTC,LastTransitionTime:2019-09-19 11:36:35 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 11:36:40.098909  108424 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 10.033262692s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 11:36:25 +0000 UTC,LastTransitionTime:2019-09-19 11:36:35 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 11:36:40.098971  108424 node_lifecycle_controller.go:796] Node node-2 is unresponsive as of 2019-09-19 11:36:40.098958014 +0000 UTC m=+349.125830796. Adding it to the Taint queue.
I0919 11:36:40.189671  108424 httplog.go:90] GET /api/v1/nodes/node-1: (2.001264ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40762]
I0919 11:36:40.289669  108424 httplog.go:90] GET /api/v1/nodes/node-1: (1.908067ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40762]
I0919 11:36:40.389651  108424 httplog.go:90] GET /api/v1/nodes/node-1: (1.54101ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40762]
I0919 11:36:40.490225  108424 httplog.go:90] GET /api/v1/nodes/node-1: (2.553871ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40762]
I0919 11:36:40.518481  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:40.518496  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:40.518549  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:40.518666  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:40.518684  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:40.518777  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:40.590383  108424 httplog.go:90] GET /api/v1/nodes/node-1: (2.475135ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40762]
I0919 11:36:40.664271  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:40.664272  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:40.664282  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:40.664686  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:40.664693  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:40.664704  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:40.689761  108424 httplog.go:90] GET /api/v1/nodes/node-1: (2.040508ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40762]
I0919 11:36:40.697944  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:40.697944  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:40.698022  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:40.698178  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:40.698309  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:40.698628  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:40.698745  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:40.719053  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:40.766069  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:40.766141  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:40.766148  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:40.766154  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:40.766162  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:40.766088  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:40.789598  108424 httplog.go:90] GET /api/v1/nodes/node-1: (1.933234ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40762]
I0919 11:36:40.871822  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:40.889968  108424 httplog.go:90] GET /api/v1/nodes/node-1: (2.271554ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40762]
I0919 11:36:40.970146  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:40.990001  108424 httplog.go:90] GET /api/v1/nodes/node-1: (2.202032ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40762]
I0919 11:36:41.090270  108424 httplog.go:90] GET /api/v1/nodes/node-1: (2.146326ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40762]
I0919 11:36:41.190020  108424 httplog.go:90] GET /api/v1/nodes/node-1: (2.151298ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40762]
I0919 11:36:41.289290  108424 httplog.go:90] GET /api/v1/nodes/node-1: (1.501701ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40762]
I0919 11:36:41.389538  108424 httplog.go:90] GET /api/v1/nodes/node-1: (1.855972ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40762]
I0919 11:36:41.489428  108424 httplog.go:90] GET /api/v1/nodes/node-1: (1.78363ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40762]
I0919 11:36:41.518729  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:41.518921  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:41.518736  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:41.518747  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:41.518795  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:41.518862  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:41.590418  108424 httplog.go:90] GET /api/v1/nodes/node-1: (2.68637ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40762]
I0919 11:36:41.664535  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:41.664679  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:41.664696  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:41.664941  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:41.664946  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:41.664965  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:41.689660  108424 httplog.go:90] GET /api/v1/nodes/node-1: (1.979375ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40762]
I0919 11:36:41.698197  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:41.698198  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:41.698198  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:41.698352  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:41.698504  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:41.698755  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:41.698972  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:41.719308  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:41.766285  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:41.766309  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:41.766381  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:41.766290  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:41.766297  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:41.766616  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:41.790126  108424 httplog.go:90] GET /api/v1/nodes/node-1: (2.428508ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40762]
I0919 11:36:41.872088  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:41.889942  108424 httplog.go:90] GET /api/v1/nodes/node-1: (2.095758ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40762]
I0919 11:36:41.970382  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:41.990004  108424 httplog.go:90] GET /api/v1/nodes/node-1: (2.275373ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40762]
I0919 11:36:42.089828  108424 httplog.go:90] GET /api/v1/nodes/node-1: (2.187258ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40762]
I0919 11:36:42.189725  108424 httplog.go:90] GET /api/v1/nodes/node-1: (2.059986ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40762]
I0919 11:36:42.290042  108424 httplog.go:90] GET /api/v1/nodes/node-1: (2.306644ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40762]
I0919 11:36:42.389546  108424 httplog.go:90] GET /api/v1/nodes/node-1: (1.715834ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40762]
I0919 11:36:42.489730  108424 httplog.go:90] GET /api/v1/nodes/node-1: (2.011951ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40762]
I0919 11:36:42.519078  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:42.519249  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:42.519246  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:42.519273  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:42.519301  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:42.519313  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:42.593924  108424 httplog.go:90] GET /api/v1/nodes/node-1: (2.502926ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40762]
I0919 11:36:42.664881  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:42.664881  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:42.664881  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:42.665239  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:42.665241  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:42.665421  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:42.689788  108424 httplog.go:90] GET /api/v1/nodes/node-1: (2.001251ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40762]
I0919 11:36:42.698541  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:42.698577  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:42.698549  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:42.698552  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:42.698680  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:42.699023  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:42.699197  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:42.721066  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:42.766430  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:42.766472  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:42.766543  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:42.766550  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:42.766574  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:42.766878  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:42.790218  108424 httplog.go:90] GET /api/v1/nodes/node-1: (2.146975ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40762]
I0919 11:36:42.872228  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:42.889819  108424 httplog.go:90] GET /api/v1/nodes/node-1: (2.074758ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40762]
I0919 11:36:42.970618  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:42.993125  108424 httplog.go:90] GET /api/v1/nodes/node-1: (5.195305ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40762]
I0919 11:36:43.089737  108424 httplog.go:90] GET /api/v1/nodes/node-1: (1.98677ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40762]
I0919 11:36:43.189610  108424 httplog.go:90] GET /api/v1/nodes/node-1: (1.916506ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40762]
I0919 11:36:43.289531  108424 httplog.go:90] GET /api/v1/nodes/node-1: (1.854379ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40762]
I0919 11:36:43.389472  108424 httplog.go:90] GET /api/v1/nodes/node-1: (1.833053ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40762]
I0919 11:36:43.489721  108424 httplog.go:90] GET /api/v1/nodes/node-1: (1.921521ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40762]
I0919 11:36:43.519287  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:43.519446  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:43.519506  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:43.519507  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:43.519529  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:43.519705  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:43.589904  108424 httplog.go:90] GET /api/v1/nodes/node-1: (2.19802ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40762]
I0919 11:36:43.665262  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:43.665262  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:43.665510  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:43.665676  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:43.665707  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:43.665722  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:43.690201  108424 httplog.go:90] GET /api/v1/nodes/node-1: (2.485037ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40762]
I0919 11:36:43.698730  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:43.698744  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:43.698757  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:43.698730  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:43.698789  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:43.699187  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:43.699396  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:43.721200  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:43.766634  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:43.766640  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:43.766717  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:43.766634  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:43.766816  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:43.767094  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:43.789625  108424 httplog.go:90] GET /api/v1/nodes/node-1: (1.913776ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40762]
I0919 11:36:43.872451  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:43.890165  108424 httplog.go:90] GET /api/v1/nodes/node-1: (2.38224ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40762]
I0919 11:36:43.970860  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:43.990178  108424 httplog.go:90] GET /api/v1/nodes/node-1: (2.362485ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40762]
I0919 11:36:44.090126  108424 httplog.go:90] GET /api/v1/nodes/node-1: (2.35485ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40762]
I0919 11:36:44.190493  108424 httplog.go:90] GET /api/v1/nodes/node-1: (2.758889ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40762]
I0919 11:36:44.289898  108424 httplog.go:90] GET /api/v1/nodes/node-1: (2.134223ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40762]
I0919 11:36:44.311382  108424 httplog.go:90] GET /api/v1/namespaces/default: (1.786577ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51154]
I0919 11:36:44.313545  108424 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.539185ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51154]
I0919 11:36:44.315309  108424 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.210823ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51154]
I0919 11:36:44.389642  108424 httplog.go:90] GET /api/v1/nodes/node-1: (1.882539ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40762]
I0919 11:36:44.489737  108424 httplog.go:90] GET /api/v1/nodes/node-1: (1.914969ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40762]
I0919 11:36:44.519520  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:44.519678  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:44.519700  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:44.519908  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:44.519690  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:44.519702  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:44.569081  108424 httplog.go:90] GET /api/v1/namespaces/default: (1.807834ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40762]
I0919 11:36:44.571501  108424 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.715082ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40762]
I0919 11:36:44.573425  108424 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.327052ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40762]
I0919 11:36:44.589720  108424 httplog.go:90] GET /api/v1/nodes/node-1: (1.960573ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40762]
I0919 11:36:44.665520  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:44.665520  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:44.665722  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:44.665906  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:44.665930  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:44.665945  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:44.690204  108424 httplog.go:90] GET /api/v1/nodes/node-1: (2.203108ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40762]
I0919 11:36:44.698943  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:44.698943  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:44.698944  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:44.698956  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:44.698956  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:44.699377  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:44.699601  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:44.721477  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:44.766826  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:44.766827  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:44.766888  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:44.767177  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:44.766857  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:44.767394  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:44.789711  108424 httplog.go:90] GET /api/v1/nodes/node-1: (2.035439ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40762]
I0919 11:36:44.872637  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:44.890206  108424 httplog.go:90] GET /api/v1/nodes/node-1: (2.520545ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40762]
I0919 11:36:44.971076  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:44.989418  108424 httplog.go:90] GET /api/v1/nodes/node-1: (1.762196ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40762]
I0919 11:36:45.089688  108424 httplog.go:90] GET /api/v1/nodes/node-1: (1.832789ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40762]
I0919 11:36:45.099405  108424 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 15.033861115s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-19 11:36:35 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0919 11:36:45.099527  108424 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 15.034035776s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 11:36:25 +0000 UTC,LastTransitionTime:2019-09-19 11:36:35 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 11:36:45.099559  108424 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 15.034069615s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 11:36:25 +0000 UTC,LastTransitionTime:2019-09-19 11:36:35 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 11:36:45.099577  108424 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 15.034081191s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 11:36:25 +0000 UTC,LastTransitionTime:2019-09-19 11:36:35 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 11:36:45.099668  108424 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 15.034131018s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-19 11:36:35 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0919 11:36:45.099687  108424 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 15.034150606s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 11:36:25 +0000 UTC,LastTransitionTime:2019-09-19 11:36:35 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 11:36:45.099714  108424 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 15.034177853s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 11:36:25 +0000 UTC,LastTransitionTime:2019-09-19 11:36:35 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 11:36:45.099730  108424 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 15.034193511s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 11:36:25 +0000 UTC,LastTransitionTime:2019-09-19 11:36:35 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 11:36:45.099768  108424 node_lifecycle_controller.go:796] Node node-1 is unresponsive as of 2019-09-19 11:36:45.099752635 +0000 UTC m=+354.126625418. Adding it to the Taint queue.
I0919 11:36:45.099815  108424 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 15.034168559s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-19 11:36:35 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0919 11:36:45.099863  108424 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 15.034216028s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 11:36:25 +0000 UTC,LastTransitionTime:2019-09-19 11:36:35 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 11:36:45.099883  108424 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 15.034236255s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 11:36:25 +0000 UTC,LastTransitionTime:2019-09-19 11:36:35 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 11:36:45.099897  108424 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 15.034250651s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 11:36:25 +0000 UTC,LastTransitionTime:2019-09-19 11:36:35 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 11:36:45.189585  108424 httplog.go:90] GET /api/v1/nodes/node-1: (1.879686ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40762]
I0919 11:36:45.289636  108424 httplog.go:90] GET /api/v1/nodes/node-1: (1.89114ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40762]
I0919 11:36:45.389993  108424 httplog.go:90] GET /api/v1/nodes/node-1: (2.169541ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40762]
I0919 11:36:45.489611  108424 httplog.go:90] GET /api/v1/nodes/node-1: (1.958105ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40762]
I0919 11:36:45.519736  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:45.519925  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:45.520092  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:45.520109  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:45.520268  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:45.520115  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:45.589944  108424 httplog.go:90] GET /api/v1/nodes/node-1: (2.312742ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40762]
I0919 11:36:45.665749  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:45.665750  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:45.665957  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:45.666070  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:45.666094  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:45.666099  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:45.689716  108424 httplog.go:90] GET /api/v1/nodes/node-1: (1.990309ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40762]
I0919 11:36:45.699190  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:45.699235  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:45.699190  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:45.699205  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:45.699211  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:45.699563  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:45.699776  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:45.721704  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:45.767063  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:45.767068  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:45.767076  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:45.767427  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:45.767537  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:45.767563  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:45.789733  108424 httplog.go:90] GET /api/v1/nodes/node-1: (2.086823ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40762]
I0919 11:36:45.872831  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:45.890186  108424 httplog.go:90] GET /api/v1/nodes/node-1: (2.404077ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40762]
I0919 11:36:45.971235  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:45.989723  108424 httplog.go:90] GET /api/v1/nodes/node-1: (2.032916ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40762]
I0919 11:36:46.089826  108424 httplog.go:90] GET /api/v1/nodes/node-1: (1.992262ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40762]
I0919 11:36:46.169222  108424 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.771169ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42320]
I0919 11:36:46.171759  108424 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.668537ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42320]
I0919 11:36:46.173673  108424 httplog.go:90] GET /api/v1/namespaces/kube-node-lease: (1.382828ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42320]
I0919 11:36:46.189799  108424 httplog.go:90] GET /api/v1/nodes/node-1: (1.95335ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40762]
I0919 11:36:46.289597  108424 httplog.go:90] GET /api/v1/nodes/node-1: (1.767982ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40762]
I0919 11:36:46.389687  108424 httplog.go:90] GET /api/v1/nodes/node-1: (1.956774ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40762]
I0919 11:36:46.489731  108424 httplog.go:90] GET /api/v1/nodes/node-1: (1.942407ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40762]
I0919 11:36:46.520117  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:46.520178  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:46.520475  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:46.520546  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:46.520558  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:46.520592  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:46.589830  108424 httplog.go:90] GET /api/v1/nodes/node-1: (2.13516ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40762]
I0919 11:36:46.665954  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:46.666101  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:46.665952  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:46.666186  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:46.666208  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:46.666332  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:46.689667  108424 httplog.go:90] GET /api/v1/nodes/node-1: (1.889459ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40762]
I0919 11:36:46.699432  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:46.699674  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:46.699732  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:46.699695  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:46.699760  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:46.699788  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:46.699936  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:46.721963  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:46.767338  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:46.767621  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:46.767388  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:46.767649  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:46.767355  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:46.767740  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:46.790035  108424 httplog.go:90] GET /api/v1/nodes/node-1: (2.32924ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40762]
I0919 11:36:46.873021  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:46.890133  108424 httplog.go:90] GET /api/v1/nodes/node-1: (2.013134ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40762]
I0919 11:36:46.971437  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:46.989312  108424 httplog.go:90] GET /api/v1/nodes/node-1: (1.64713ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40762]
I0919 11:36:47.089632  108424 httplog.go:90] GET /api/v1/nodes/node-1: (1.847257ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40762]
I0919 11:36:47.189801  108424 httplog.go:90] GET /api/v1/nodes/node-1: (2.156472ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40762]
I0919 11:36:47.290155  108424 httplog.go:90] GET /api/v1/nodes/node-1: (2.385667ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40762]
I0919 11:36:47.389798  108424 httplog.go:90] GET /api/v1/nodes/node-1: (2.021168ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40762]
I0919 11:36:47.489827  108424 httplog.go:90] GET /api/v1/nodes/node-1: (2.032695ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40762]
I0919 11:36:47.520299  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:47.520539  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:47.520642  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:47.520714  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:47.520738  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:47.520810  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:47.580994  108424 httplog.go:90] GET /api/v1/namespaces/default: (1.698933ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58358]
I0919 11:36:47.583122  108424 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.47871ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58358]
I0919 11:36:47.584864  108424 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.124552ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58358]
I0919 11:36:47.588810  108424 httplog.go:90] GET /api/v1/nodes/node-1: (1.291696ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40762]
I0919 11:36:47.666261  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:47.666472  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:47.666274  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:47.666274  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:47.666273  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:47.666342  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:47.689819  108424 httplog.go:90] GET /api/v1/nodes/node-1: (2.010312ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40762]
I0919 11:36:47.699774  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:47.699932  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:47.699946  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:47.699957  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:47.699913  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:47.699932  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:47.700125  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:47.722138  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:47.767773  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:47.767800  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:47.767804  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:47.767812  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:47.767779  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:47.767887  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:47.789702  108424 httplog.go:90] GET /api/v1/nodes/node-1: (1.909377ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40762]
I0919 11:36:47.873189  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:47.889912  108424 httplog.go:90] GET /api/v1/nodes/node-1: (2.171066ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40762]
I0919 11:36:47.971683  108424 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 11:36:47.990184  108424 httplog.go:90] GET /api/v1/nodes/node-1: (2.479359ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40762]
I0919 11:36:48.089756  108424 httplog.go:90] GET /api/v1/nodes/node-1: (2.085465ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40762]
I0919 11:36:48.189916  108424 httplog.go:90] GET /api/v1/nodes/node-1: (2.194522ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40762]
I0919 11:36:48.290092  108424 httplog.go:90] GET /api/v1/nodes/node-1: (1.981192ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40762]
I0919 11:36:48.389771  108424 httplog.go:90]