PR: draveness: feat: update taint nodes by condition to GA
Result: FAILURE
Tests: 10 failed / 2858 succeeded
Started: 2019-09-19 11:44
Elapsed: 30m6s
Revision:
Builder: gke-prow-ssd-pool-1a225945-fp0z
Refs: master:b8866250, 82703:64f51b78
pod: 85f0801e-dad2-11e9-b7bb-32cecfce85d6
infra-commit: fe9f237a8
repo: k8s.io/kubernetes
repo-commit: eac1680bd9d5abdcbc9242b67fc9885816bc64a2
repos: {u'k8s.io/kubernetes': u'master:b88662505d288297750becf968bf307dacf872fa,82703:64f51b78efa76ae772f62791f7faadf3b30923d8'}

Test Failures


k8s.io/kubernetes/test/integration/scheduler TestNodePIDPressure 33s

go test -v k8s.io/kubernetes/test/integration/scheduler -run TestNodePIDPressure$
=== RUN   TestNodePIDPressure
W0919 12:07:55.134052  108421 services.go:35] No CIDR for service cluster IPs specified. Default value which was 10.0.0.0/24 is deprecated and will be removed in future releases. Please specify it using --service-cluster-ip-range on kube-apiserver.
I0919 12:07:55.134083  108421 services.go:47] Setting service IP to "10.0.0.1" (read-write).
I0919 12:07:55.134097  108421 master.go:303] Node port range unspecified. Defaulting to 30000-32767.
I0919 12:07:55.134107  108421 master.go:259] Using reconciler: 
I0919 12:07:55.136174  108421 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.136349  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:07:55.136374  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:07:55.137255  108421 store.go:1342] Monitoring podtemplates count at <storage-prefix>//podtemplates
I0919 12:07:55.137291  108421 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.137579  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:07:55.137605  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:07:55.137717  108421 reflector.go:153] Listing and watching *core.PodTemplate from storage/cacher.go:/podtemplates
I0919 12:07:55.138843  108421 store.go:1342] Monitoring events count at <storage-prefix>//events
I0919 12:07:55.138878  108421 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.138994  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:07:55.139001  108421 watch_cache.go:405] Replace watchCache (rev: 30267) 
I0919 12:07:55.139010  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:07:55.139116  108421 reflector.go:153] Listing and watching *core.Event from storage/cacher.go:/events
I0919 12:07:55.140269  108421 store.go:1342] Monitoring limitranges count at <storage-prefix>//limitranges
I0919 12:07:55.140310  108421 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.140399  108421 reflector.go:153] Listing and watching *core.LimitRange from storage/cacher.go:/limitranges
I0919 12:07:55.140470  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:07:55.140489  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:07:55.141271  108421 watch_cache.go:405] Replace watchCache (rev: 30267) 
I0919 12:07:55.142234  108421 store.go:1342] Monitoring resourcequotas count at <storage-prefix>//resourcequotas
I0919 12:07:55.142285  108421 reflector.go:153] Listing and watching *core.ResourceQuota from storage/cacher.go:/resourcequotas
I0919 12:07:55.142623  108421 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.142756  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:07:55.142775  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:07:55.143508  108421 watch_cache.go:405] Replace watchCache (rev: 30267) 
I0919 12:07:55.143837  108421 store.go:1342] Monitoring secrets count at <storage-prefix>//secrets
I0919 12:07:55.144053  108421 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.144389  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:07:55.144442  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:07:55.144525  108421 reflector.go:153] Listing and watching *core.Secret from storage/cacher.go:/secrets
I0919 12:07:55.146013  108421 watch_cache.go:405] Replace watchCache (rev: 30267) 
I0919 12:07:55.146264  108421 store.go:1342] Monitoring persistentvolumes count at <storage-prefix>//persistentvolumes
I0919 12:07:55.146329  108421 reflector.go:153] Listing and watching *core.PersistentVolume from storage/cacher.go:/persistentvolumes
I0919 12:07:55.146501  108421 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.147526  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:07:55.147559  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:07:55.147704  108421 watch_cache.go:405] Replace watchCache (rev: 30267) 
I0919 12:07:55.148192  108421 watch_cache.go:405] Replace watchCache (rev: 30267) 
I0919 12:07:55.149296  108421 store.go:1342] Monitoring persistentvolumeclaims count at <storage-prefix>//persistentvolumeclaims
I0919 12:07:55.149513  108421 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.149654  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:07:55.149672  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:07:55.149770  108421 reflector.go:153] Listing and watching *core.PersistentVolumeClaim from storage/cacher.go:/persistentvolumeclaims
I0919 12:07:55.150989  108421 watch_cache.go:405] Replace watchCache (rev: 30267) 
I0919 12:07:55.151270  108421 store.go:1342] Monitoring configmaps count at <storage-prefix>//configmaps
I0919 12:07:55.151464  108421 reflector.go:153] Listing and watching *core.ConfigMap from storage/cacher.go:/configmaps
I0919 12:07:55.151471  108421 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.151587  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:07:55.151603  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:07:55.152180  108421 watch_cache.go:405] Replace watchCache (rev: 30267) 
I0919 12:07:55.152412  108421 store.go:1342] Monitoring namespaces count at <storage-prefix>//namespaces
I0919 12:07:55.152558  108421 reflector.go:153] Listing and watching *core.Namespace from storage/cacher.go:/namespaces
I0919 12:07:55.152600  108421 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.152764  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:07:55.152785  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:07:55.153867  108421 watch_cache.go:405] Replace watchCache (rev: 30267) 
I0919 12:07:55.154118  108421 store.go:1342] Monitoring endpoints count at <storage-prefix>//services/endpoints
I0919 12:07:55.154285  108421 reflector.go:153] Listing and watching *core.Endpoints from storage/cacher.go:/services/endpoints
I0919 12:07:55.154331  108421 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.154491  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:07:55.154593  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:07:55.155558  108421 store.go:1342] Monitoring nodes count at <storage-prefix>//minions
I0919 12:07:55.155570  108421 watch_cache.go:405] Replace watchCache (rev: 30267) 
I0919 12:07:55.155748  108421 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.155828  108421 reflector.go:153] Listing and watching *core.Node from storage/cacher.go:/minions
I0919 12:07:55.155899  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:07:55.155921  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:07:55.157340  108421 store.go:1342] Monitoring pods count at <storage-prefix>//pods
I0919 12:07:55.157538  108421 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.157577  108421 watch_cache.go:405] Replace watchCache (rev: 30267) 
I0919 12:07:55.157660  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:07:55.157681  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:07:55.157791  108421 reflector.go:153] Listing and watching *core.Pod from storage/cacher.go:/pods
I0919 12:07:55.158755  108421 watch_cache.go:405] Replace watchCache (rev: 30267) 
I0919 12:07:55.159468  108421 store.go:1342] Monitoring serviceaccounts count at <storage-prefix>//serviceaccounts
I0919 12:07:55.159679  108421 reflector.go:153] Listing and watching *core.ServiceAccount from storage/cacher.go:/serviceaccounts
I0919 12:07:55.159679  108421 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.160000  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:07:55.160030  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:07:55.160857  108421 watch_cache.go:405] Replace watchCache (rev: 30267) 
I0919 12:07:55.161552  108421 store.go:1342] Monitoring services count at <storage-prefix>//services/specs
I0919 12:07:55.161590  108421 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.161746  108421 reflector.go:153] Listing and watching *core.Service from storage/cacher.go:/services/specs
I0919 12:07:55.161812  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:07:55.161829  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:07:55.162994  108421 watch_cache.go:405] Replace watchCache (rev: 30267) 
I0919 12:07:55.163483  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:07:55.163513  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:07:55.164369  108421 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.164678  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:07:55.164721  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:07:55.165388  108421 store.go:1342] Monitoring replicationcontrollers count at <storage-prefix>//controllers
I0919 12:07:55.165434  108421 rest.go:115] the default service ipfamily for this cluster is: IPv4
I0919 12:07:55.165647  108421 reflector.go:153] Listing and watching *core.ReplicationController from storage/cacher.go:/controllers
I0919 12:07:55.165955  108421 storage_factory.go:285] storing bindings in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.166239  108421 storage_factory.go:285] storing componentstatuses in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.167323  108421 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.167737  108421 watch_cache.go:405] Replace watchCache (rev: 30268) 
I0919 12:07:55.168190  108421 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.169017  108421 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.169925  108421 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.170444  108421 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.170602  108421 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.170832  108421 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.171356  108421 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.172240  108421 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.172523  108421 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.173382  108421 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.173750  108421 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.174538  108421 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.174802  108421 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.175542  108421 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.175756  108421 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.175924  108421 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.176083  108421 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.176287  108421 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.176460  108421 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.176832  108421 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.177837  108421 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.178147  108421 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.179193  108421 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.180075  108421 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.180377  108421 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.180692  108421 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.181574  108421 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.181904  108421 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.182637  108421 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.183578  108421 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.184238  108421 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.185270  108421 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.185584  108421 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.185743  108421 master.go:450] Skipping disabled API group "auditregistration.k8s.io".
I0919 12:07:55.185773  108421 master.go:461] Enabling API group "authentication.k8s.io".
I0919 12:07:55.185790  108421 master.go:461] Enabling API group "authorization.k8s.io".
I0919 12:07:55.185964  108421 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.186180  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:07:55.186209  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:07:55.187196  108421 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0919 12:07:55.187230  108421 reflector.go:153] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0919 12:07:55.187436  108421 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.187580  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:07:55.187608  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:07:55.188412  108421 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0919 12:07:55.188651  108421 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.188765  108421 reflector.go:153] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0919 12:07:55.188791  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:07:55.188821  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:07:55.188844  108421 watch_cache.go:405] Replace watchCache (rev: 30268) 
I0919 12:07:55.190173  108421 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0919 12:07:55.190202  108421 master.go:461] Enabling API group "autoscaling".
I0919 12:07:55.190354  108421 watch_cache.go:405] Replace watchCache (rev: 30268) 
I0919 12:07:55.190366  108421 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.190552  108421 reflector.go:153] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0919 12:07:55.190567  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:07:55.190591  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:07:55.191671  108421 watch_cache.go:405] Replace watchCache (rev: 30268) 
I0919 12:07:55.192203  108421 store.go:1342] Monitoring jobs.batch count at <storage-prefix>//jobs
I0919 12:07:55.192384  108421 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.192561  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:07:55.192573  108421 reflector.go:153] Listing and watching *batch.Job from storage/cacher.go:/jobs
I0919 12:07:55.192583  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:07:55.194183  108421 watch_cache.go:405] Replace watchCache (rev: 30268) 
I0919 12:07:55.194877  108421 store.go:1342] Monitoring cronjobs.batch count at <storage-prefix>//cronjobs
I0919 12:07:55.195025  108421 master.go:461] Enabling API group "batch".
I0919 12:07:55.195386  108421 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.195592  108421 reflector.go:153] Listing and watching *batch.CronJob from storage/cacher.go:/cronjobs
I0919 12:07:55.197527  108421 watch_cache.go:405] Replace watchCache (rev: 30268) 
I0919 12:07:55.197883  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:07:55.198017  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:07:55.203524  108421 store.go:1342] Monitoring certificatesigningrequests.certificates.k8s.io count at <storage-prefix>//certificatesigningrequests
I0919 12:07:55.203563  108421 master.go:461] Enabling API group "certificates.k8s.io".
I0919 12:07:55.203753  108421 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.203934  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:07:55.203956  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:07:55.204062  108421 reflector.go:153] Listing and watching *certificates.CertificateSigningRequest from storage/cacher.go:/certificatesigningrequests
I0919 12:07:55.205215  108421 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0919 12:07:55.205439  108421 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.205612  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:07:55.205616  108421 reflector.go:153] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0919 12:07:55.205636  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:07:55.207146  108421 watch_cache.go:405] Replace watchCache (rev: 30270) 
I0919 12:07:55.209496  108421 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0919 12:07:55.209517  108421 master.go:461] Enabling API group "coordination.k8s.io".
I0919 12:07:55.209531  108421 master.go:450] Skipping disabled API group "discovery.k8s.io".
I0919 12:07:55.209733  108421 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.209925  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:07:55.209952  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:07:55.210062  108421 reflector.go:153] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0919 12:07:55.211592  108421 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0919 12:07:55.211595  108421 watch_cache.go:405] Replace watchCache (rev: 30271) 
I0919 12:07:55.211639  108421 master.go:461] Enabling API group "extensions".
I0919 12:07:55.211665  108421 reflector.go:153] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0919 12:07:55.211916  108421 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.212092  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:07:55.212119  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:07:55.213541  108421 watch_cache.go:405] Replace watchCache (rev: 30271) 
I0919 12:07:55.213653  108421 watch_cache.go:405] Replace watchCache (rev: 30271) 
I0919 12:07:55.214264  108421 store.go:1342] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I0919 12:07:55.214313  108421 reflector.go:153] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I0919 12:07:55.214485  108421 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.214822  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:07:55.214847  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:07:55.215715  108421 watch_cache.go:405] Replace watchCache (rev: 30271) 
I0919 12:07:55.216490  108421 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0919 12:07:55.216548  108421 reflector.go:153] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0919 12:07:55.216653  108421 master.go:461] Enabling API group "networking.k8s.io".
I0919 12:07:55.216745  108421 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.216909  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:07:55.216934  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:07:55.217562  108421 store.go:1342] Monitoring runtimeclasses.node.k8s.io count at <storage-prefix>//runtimeclasses
I0919 12:07:55.217585  108421 master.go:461] Enabling API group "node.k8s.io".
I0919 12:07:55.217650  108421 reflector.go:153] Listing and watching *node.RuntimeClass from storage/cacher.go:/runtimeclasses
I0919 12:07:55.217934  108421 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.218054  108421 watch_cache.go:405] Replace watchCache (rev: 30271) 
I0919 12:07:55.218095  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:07:55.218113  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:07:55.218530  108421 watch_cache.go:405] Replace watchCache (rev: 30271) 
I0919 12:07:55.220353  108421 store.go:1342] Monitoring poddisruptionbudgets.policy count at <storage-prefix>//poddisruptionbudgets
I0919 12:07:55.220573  108421 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.220723  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:07:55.220745  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:07:55.220861  108421 reflector.go:153] Listing and watching *policy.PodDisruptionBudget from storage/cacher.go:/poddisruptionbudgets
I0919 12:07:55.222620  108421 watch_cache.go:405] Replace watchCache (rev: 30271) 
I0919 12:07:55.222961  108421 store.go:1342] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicy
I0919 12:07:55.222977  108421 master.go:461] Enabling API group "policy".
I0919 12:07:55.223026  108421 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.223159  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:07:55.223176  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:07:55.223273  108421 reflector.go:153] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicy
I0919 12:07:55.224833  108421 watch_cache.go:405] Replace watchCache (rev: 30271) 
I0919 12:07:55.225263  108421 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0919 12:07:55.225485  108421 reflector.go:153] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0919 12:07:55.225483  108421 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.225645  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:07:55.225663  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:07:55.227052  108421 watch_cache.go:405] Replace watchCache (rev: 30271) 
I0919 12:07:55.228313  108421 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0919 12:07:55.228367  108421 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.228507  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:07:55.228534  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:07:55.228627  108421 reflector.go:153] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0919 12:07:55.229815  108421 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0919 12:07:55.230029  108421 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.230137  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:07:55.230161  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:07:55.230452  108421 reflector.go:153] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0919 12:07:55.232144  108421 watch_cache.go:405] Replace watchCache (rev: 30271) 
I0919 12:07:55.235201  108421 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0919 12:07:55.235402  108421 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.235758  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:07:55.235863  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:07:55.236035  108421 reflector.go:153] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0919 12:07:55.238068  108421 watch_cache.go:405] Replace watchCache (rev: 30271) 
I0919 12:07:55.238445  108421 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0919 12:07:55.238662  108421 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.238834  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:07:55.238856  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:07:55.238953  108421 reflector.go:153] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0919 12:07:55.240468  108421 watch_cache.go:405] Replace watchCache (rev: 30271) 
I0919 12:07:55.241788  108421 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0919 12:07:55.241835  108421 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.241956  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:07:55.241976  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:07:55.242053  108421 reflector.go:153] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0919 12:07:55.243412  108421 watch_cache.go:405] Replace watchCache (rev: 30272) 
I0919 12:07:55.243717  108421 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0919 12:07:55.243962  108421 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.244069  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:07:55.244093  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:07:55.244348  108421 reflector.go:153] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0919 12:07:55.245695  108421 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0919 12:07:55.245724  108421 master.go:461] Enabling API group "rbac.authorization.k8s.io".
I0919 12:07:55.247986  108421 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.248135  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:07:55.248160  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:07:55.248251  108421 reflector.go:153] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0919 12:07:55.248751  108421 watch_cache.go:405] Replace watchCache (rev: 30272) 
I0919 12:07:55.250583  108421 watch_cache.go:405] Replace watchCache (rev: 30272) 
I0919 12:07:55.252139  108421 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0919 12:07:55.252384  108421 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.252690  108421 reflector.go:153] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0919 12:07:55.253647  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:07:55.253745  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:07:55.254474  108421 watch_cache.go:405] Replace watchCache (rev: 30272) 
I0919 12:07:55.255013  108421 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0919 12:07:55.255191  108421 reflector.go:153] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0919 12:07:55.255307  108421 master.go:461] Enabling API group "scheduling.k8s.io".
I0919 12:07:55.255579  108421 master.go:450] Skipping disabled API group "settings.k8s.io".
I0919 12:07:55.255876  108421 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.256160  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:07:55.256263  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:07:55.256354  108421 watch_cache.go:405] Replace watchCache (rev: 30272) 
I0919 12:07:55.257825  108421 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0919 12:07:55.258003  108421 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.258131  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:07:55.258159  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:07:55.258261  108421 reflector.go:153] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0919 12:07:55.259012  108421 watch_cache.go:405] Replace watchCache (rev: 30272) 
I0919 12:07:55.259714  108421 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0919 12:07:55.259807  108421 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.259827  108421 reflector.go:153] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0919 12:07:55.259920  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:07:55.259944  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:07:55.261501  108421 watch_cache.go:405] Replace watchCache (rev: 30272) 
I0919 12:07:55.263068  108421 store.go:1342] Monitoring csinodes.storage.k8s.io count at <storage-prefix>//csinodes
I0919 12:07:55.263143  108421 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.263211  108421 reflector.go:153] Listing and watching *storage.CSINode from storage/cacher.go:/csinodes
I0919 12:07:55.263562  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:07:55.263612  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:07:55.264276  108421 watch_cache.go:405] Replace watchCache (rev: 30272) 
I0919 12:07:55.264468  108421 watch_cache.go:405] Replace watchCache (rev: 30272) 
I0919 12:07:55.266045  108421 store.go:1342] Monitoring csidrivers.storage.k8s.io count at <storage-prefix>//csidrivers
I0919 12:07:55.266264  108421 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.267680  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:07:55.266436  108421 reflector.go:153] Listing and watching *storage.CSIDriver from storage/cacher.go:/csidrivers
I0919 12:07:55.267820  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:07:55.268719  108421 watch_cache.go:405] Replace watchCache (rev: 30272) 
I0919 12:07:55.270698  108421 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0919 12:07:55.271209  108421 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.271524  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:07:55.271624  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:07:55.271798  108421 reflector.go:153] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0919 12:07:55.273071  108421 watch_cache.go:405] Replace watchCache (rev: 30272) 
I0919 12:07:55.274763  108421 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0919 12:07:55.275039  108421 master.go:461] Enabling API group "storage.k8s.io".
I0919 12:07:55.275339  108421 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.275579  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:07:55.275668  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:07:55.274941  108421 reflector.go:153] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0919 12:07:55.277281  108421 store.go:1342] Monitoring deployments.apps count at <storage-prefix>//deployments
I0919 12:07:55.277321  108421 reflector.go:153] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0919 12:07:55.277524  108421 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.277683  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:07:55.277710  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:07:55.278368  108421 store.go:1342] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0919 12:07:55.278587  108421 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.278881  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:07:55.278907  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:07:55.278991  108421 reflector.go:153] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0919 12:07:55.279069  108421 watch_cache.go:405] Replace watchCache (rev: 30272) 
I0919 12:07:55.281170  108421 watch_cache.go:405] Replace watchCache (rev: 30272) 
I0919 12:07:55.281478  108421 watch_cache.go:405] Replace watchCache (rev: 30272) 
I0919 12:07:55.281682  108421 store.go:1342] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0919 12:07:55.281736  108421 reflector.go:153] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0919 12:07:55.281863  108421 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.282057  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:07:55.282083  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:07:55.283038  108421 watch_cache.go:405] Replace watchCache (rev: 30272) 
I0919 12:07:55.284534  108421 store.go:1342] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0919 12:07:55.284726  108421 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.284895  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:07:55.284913  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:07:55.285015  108421 reflector.go:153] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0919 12:07:55.285803  108421 store.go:1342] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0919 12:07:55.285822  108421 master.go:461] Enabling API group "apps".
I0919 12:07:55.285858  108421 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.285960  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:07:55.285976  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:07:55.286059  108421 reflector.go:153] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0919 12:07:55.287654  108421 watch_cache.go:405] Replace watchCache (rev: 30273) 
I0919 12:07:55.288321  108421 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0919 12:07:55.288369  108421 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.288513  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:07:55.288533  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:07:55.288819  108421 reflector.go:153] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0919 12:07:55.289382  108421 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0919 12:07:55.289441  108421 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.289571  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:07:55.289593  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:07:55.289664  108421 reflector.go:153] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0919 12:07:55.290701  108421 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0919 12:07:55.290740  108421 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.290853  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:07:55.290871  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:07:55.290961  108421 reflector.go:153] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0919 12:07:55.291434  108421 watch_cache.go:405] Replace watchCache (rev: 30273) 
I0919 12:07:55.293077  108421 watch_cache.go:405] Replace watchCache (rev: 30273) 
I0919 12:07:55.293503  108421 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0919 12:07:55.293522  108421 master.go:461] Enabling API group "admissionregistration.k8s.io".
I0919 12:07:55.293555  108421 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.293841  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:07:55.293862  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:07:55.293931  108421 reflector.go:153] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0919 12:07:55.295603  108421 watch_cache.go:405] Replace watchCache (rev: 30273) 
I0919 12:07:55.295669  108421 store.go:1342] Monitoring events count at <storage-prefix>//events
I0919 12:07:55.295693  108421 master.go:461] Enabling API group "events.k8s.io".
I0919 12:07:55.295919  108421 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.296102  108421 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.296225  108421 reflector.go:153] Listing and watching *core.Event from storage/cacher.go:/events
I0919 12:07:55.296395  108421 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.296528  108421 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.296628  108421 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.296739  108421 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.296932  108421 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.297021  108421 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.297162  108421 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.297373  108421 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.298145  108421 watch_cache.go:405] Replace watchCache (rev: 30273) 
I0919 12:07:55.298753  108421 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.299132  108421 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.300409  108421 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.300827  108421 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.302092  108421 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.302616  108421 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.304123  108421 watch_cache.go:405] Replace watchCache (rev: 30273) 
I0919 12:07:55.306017  108421 watch_cache.go:405] Replace watchCache (rev: 30273) 
I0919 12:07:55.308383  108421 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.308782  108421 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.309966  108421 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.310265  108421 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0919 12:07:55.310504  108421 genericapiserver.go:404] Skipping API batch/v2alpha1 because it has no resources.
I0919 12:07:55.311402  108421 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.311696  108421 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.312051  108421 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.313177  108421 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.314179  108421 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.315454  108421 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.315806  108421 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.316758  108421 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.318023  108421 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.318361  108421 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.319261  108421 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0919 12:07:55.319392  108421 genericapiserver.go:404] Skipping API node.k8s.io/v1alpha1 because it has no resources.
I0919 12:07:55.320608  108421 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.320994  108421 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.321713  108421 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.322667  108421 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.323261  108421 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.324065  108421 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.325074  108421 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.325826  108421 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.326449  108421 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.327517  108421 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.328275  108421 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0919 12:07:55.328452  108421 genericapiserver.go:404] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
I0919 12:07:55.329238  108421 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.330163  108421 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0919 12:07:55.330303  108421 genericapiserver.go:404] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
I0919 12:07:55.331003  108421 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.331995  108421 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.332380  108421 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.333059  108421 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.333658  108421 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.336125  108421 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.336881  108421 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0919 12:07:55.337004  108421 genericapiserver.go:404] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
I0919 12:07:55.337960  108421 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.339038  108421 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.339391  108421 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.340240  108421 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.340588  108421 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.341095  108421 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.341950  108421 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.342319  108421 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.342666  108421 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.343948  108421 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.344409  108421 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.344791  108421 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0919 12:07:55.344934  108421 genericapiserver.go:404] Skipping API apps/v1beta2 because it has no resources.
W0919 12:07:55.344999  108421 genericapiserver.go:404] Skipping API apps/v1beta1 because it has no resources.
I0919 12:07:55.345954  108421 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.347042  108421 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.348019  108421 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.349120  108421 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.350160  108421 storage_factory.go:285] storing events.events.k8s.io in events.k8s.io/v1beta1, reading as events.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6debe7c9-e026-4436-a4f6-318a71a853c6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:07:55.355019  108421 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 12:07:55.355046  108421 healthz.go:177] healthz check poststarthook/bootstrap-controller failed: not finished
I0919 12:07:55.355058  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:07:55.355068  108421 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 12:07:55.355077  108421 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 12:07:55.355086  108421 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[-]poststarthook/bootstrap-controller failed: reason withheld
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 12:07:55.355119  108421 httplog.go:90] GET /healthz: (211.269µs) 0 [Go-http-client/1.1 127.0.0.1:57384]
I0919 12:07:55.356085  108421 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.350248ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57386]
I0919 12:07:55.359122  108421 httplog.go:90] GET /api/v1/services: (1.156009ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57386]
I0919 12:07:55.363620  108421 httplog.go:90] GET /api/v1/services: (985.579µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57386]
I0919 12:07:55.365992  108421 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 12:07:55.366036  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:07:55.366048  108421 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 12:07:55.366058  108421 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 12:07:55.366066  108421 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 12:07:55.366094  108421 httplog.go:90] GET /healthz: (195.066µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0919 12:07:55.367670  108421 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.478479ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57386]
I0919 12:07:55.370358  108421 httplog.go:90] POST /api/v1/namespaces: (1.754337ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57386]
I0919 12:07:55.370602  108421 httplog.go:90] GET /api/v1/services: (2.37223ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0919 12:07:55.372186  108421 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.068982ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57386]
I0919 12:07:55.373607  108421 httplog.go:90] GET /api/v1/services: (975.206µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0919 12:07:55.374879  108421 httplog.go:90] POST /api/v1/namespaces: (2.177481ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57386]
I0919 12:07:55.376442  108421 httplog.go:90] GET /api/v1/namespaces/kube-node-lease: (868.384µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57386]
I0919 12:07:55.378491  108421 httplog.go:90] POST /api/v1/namespaces: (1.586052ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57386]
I0919 12:07:55.457112  108421 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 12:07:55.457154  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:07:55.457171  108421 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 12:07:55.457182  108421 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 12:07:55.457191  108421 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 12:07:55.457242  108421 httplog.go:90] GET /healthz: (283.4µs) 0 [Go-http-client/1.1 127.0.0.1:57386]
I0919 12:07:55.467489  108421 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 12:07:55.467523  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:07:55.467535  108421 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 12:07:55.467547  108421 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 12:07:55.467555  108421 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 12:07:55.467588  108421 httplog.go:90] GET /healthz: (269.208µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57386]
I0919 12:07:55.556944  108421 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 12:07:55.556985  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:07:55.556999  108421 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 12:07:55.557009  108421 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 12:07:55.557018  108421 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 12:07:55.557062  108421 httplog.go:90] GET /healthz: (257.332µs) 0 [Go-http-client/1.1 127.0.0.1:57386]
I0919 12:07:55.567397  108421 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 12:07:55.567455  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:07:55.567468  108421 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 12:07:55.567479  108421 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 12:07:55.567488  108421 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 12:07:55.567520  108421 httplog.go:90] GET /healthz: (286.027µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57386]
I0919 12:07:55.656880  108421 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 12:07:55.656922  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:07:55.656934  108421 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 12:07:55.656944  108421 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 12:07:55.656957  108421 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 12:07:55.656989  108421 httplog.go:90] GET /healthz: (251.94µs) 0 [Go-http-client/1.1 127.0.0.1:57386]
I0919 12:07:55.667443  108421 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 12:07:55.667477  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:07:55.667490  108421 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 12:07:55.667500  108421 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 12:07:55.667508  108421 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 12:07:55.667547  108421 httplog.go:90] GET /healthz: (276.977µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57386]
I0919 12:07:55.756927  108421 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 12:07:55.756966  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:07:55.756980  108421 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 12:07:55.756989  108421 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 12:07:55.756997  108421 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 12:07:55.757044  108421 httplog.go:90] GET /healthz: (280.11µs) 0 [Go-http-client/1.1 127.0.0.1:57386]
I0919 12:07:55.767525  108421 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 12:07:55.767566  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:07:55.767577  108421 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 12:07:55.767587  108421 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 12:07:55.767595  108421 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 12:07:55.767629  108421 httplog.go:90] GET /healthz: (327.539µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57386]
I0919 12:07:55.856880  108421 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 12:07:55.856920  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:07:55.856948  108421 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 12:07:55.856957  108421 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 12:07:55.856964  108421 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 12:07:55.856996  108421 httplog.go:90] GET /healthz: (255.382µs) 0 [Go-http-client/1.1 127.0.0.1:57386]
I0919 12:07:55.867644  108421 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 12:07:55.867686  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:07:55.867698  108421 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 12:07:55.867708  108421 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 12:07:55.867716  108421 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 12:07:55.867747  108421 httplog.go:90] GET /healthz: (273.098µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57386]
I0919 12:07:55.956891  108421 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 12:07:55.956937  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:07:55.956954  108421 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 12:07:55.956963  108421 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 12:07:55.956970  108421 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 12:07:55.957001  108421 httplog.go:90] GET /healthz: (271.804µs) 0 [Go-http-client/1.1 127.0.0.1:57386]
I0919 12:07:55.967517  108421 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 12:07:55.967553  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:07:55.967580  108421 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 12:07:55.967590  108421 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 12:07:55.967603  108421 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 12:07:55.967647  108421 httplog.go:90] GET /healthz: (295.718µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57386]
I0919 12:07:56.059923  108421 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 12:07:56.059954  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:07:56.059966  108421 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 12:07:56.059974  108421 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 12:07:56.059983  108421 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 12:07:56.060027  108421 httplog.go:90] GET /healthz: (298.276µs) 0 [Go-http-client/1.1 127.0.0.1:57386]
I0919 12:07:56.067496  108421 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 12:07:56.067533  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:07:56.067545  108421 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 12:07:56.067555  108421 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 12:07:56.067563  108421 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 12:07:56.067621  108421 httplog.go:90] GET /healthz: (322.913µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57386]
I0919 12:07:56.133850  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:07:56.133942  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:07:56.157747  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:07:56.157787  108421 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 12:07:56.157798  108421 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 12:07:56.157808  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 12:07:56.157848  108421 httplog.go:90] GET /healthz: (1.089828ms) 0 [Go-http-client/1.1 127.0.0.1:57386]
I0919 12:07:56.168312  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:07:56.168341  108421 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 12:07:56.168361  108421 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 12:07:56.168370  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 12:07:56.168410  108421 httplog.go:90] GET /healthz: (1.129734ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57386]
I0919 12:07:56.257682  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:07:56.257713  108421 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 12:07:56.257731  108421 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 12:07:56.257740  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 12:07:56.257782  108421 httplog.go:90] GET /healthz: (1.008665ms) 0 [Go-http-client/1.1 127.0.0.1:57386]
I0919 12:07:56.268966  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:07:56.268999  108421 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 12:07:56.269010  108421 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 12:07:56.269019  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 12:07:56.269069  108421 httplog.go:90] GET /healthz: (1.785287ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57386]
I0919 12:07:56.357794  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.753975ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57534]
I0919 12:07:56.358007  108421 httplog.go:90] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-node-critical: (2.04546ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0919 12:07:56.358456  108421 httplog.go:90] GET /api/v1/namespaces/kube-system: (3.18011ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57386]
I0919 12:07:56.359451  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.135087ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0919 12:07:56.360708  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (887.091µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0919 12:07:56.360773  108421 httplog.go:90] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (2.380717ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57534]
I0919 12:07:56.361043  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:07:56.361058  108421 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 12:07:56.361068  108421 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 12:07:56.361076  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 12:07:56.361113  108421 httplog.go:90] GET /healthz: (4.217055ms) 0 [Go-http-client/1.1 127.0.0.1:57536]
I0919 12:07:56.361142  108421 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: (1.885141ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57386]
I0919 12:07:56.361455  108421 storage_scheduling.go:139] created PriorityClass system-node-critical with value 2000001000
I0919 12:07:56.363281  108421 httplog.go:90] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-cluster-critical: (1.568285ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57536]
I0919 12:07:56.363287  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (1.76883ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0919 12:07:56.363595  108421 httplog.go:90] POST /api/v1/namespaces/kube-system/configmaps: (1.730676ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57538]
I0919 12:07:56.365536  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (1.85878ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57536]
I0919 12:07:56.366301  108421 httplog.go:90] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (2.614037ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0919 12:07:56.366542  108421 storage_scheduling.go:139] created PriorityClass system-cluster-critical with value 2000000000
I0919 12:07:56.366569  108421 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
I0919 12:07:56.367050  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (1.20122ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57536]
I0919 12:07:56.367918  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:07:56.367953  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 12:07:56.367983  108421 httplog.go:90] GET /healthz: (775.92µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0919 12:07:56.368261  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (846.172µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57536]
I0919 12:07:56.370004  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (981.531µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0919 12:07:56.371630  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (1.129704ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0919 12:07:56.372837  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-admin: (803.342µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0919 12:07:56.374706  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.409042ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0919 12:07:56.374867  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/cluster-admin
I0919 12:07:56.375879  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:discovery: (824.464µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0919 12:07:56.377806  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.47549ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0919 12:07:56.378230  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:discovery
I0919 12:07:56.379398  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:basic-user: (915.555µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0919 12:07:56.381669  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.732443ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0919 12:07:56.382738  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:basic-user
I0919 12:07:56.384011  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:public-info-viewer: (1.050346ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0919 12:07:56.386109  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.758991ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0919 12:07:56.386343  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:public-info-viewer
I0919 12:07:56.387685  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (1.101565ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0919 12:07:56.389714  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.644591ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0919 12:07:56.389904  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/admin
I0919 12:07:56.391118  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (1.053531ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0919 12:07:56.393152  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.631953ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0919 12:07:56.393360  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/edit
I0919 12:07:56.394598  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (924.817µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0919 12:07:56.396537  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.511175ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0919 12:07:56.396827  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/view
I0919 12:07:56.397987  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (898.145µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0919 12:07:56.400119  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.649503ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0919 12:07:56.400327  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin
I0919 12:07:56.401281  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (784.925µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0919 12:07:56.403555  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.904014ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0919 12:07:56.403904  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit
I0919 12:07:56.404990  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (881.478µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0919 12:07:56.407603  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.129504ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0919 12:07:56.407852  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view
I0919 12:07:56.409185  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:heapster: (990.053µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0919 12:07:56.411242  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.462619ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0919 12:07:56.411630  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:heapster
I0919 12:07:56.413815  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node: (1.974591ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0919 12:07:56.416495  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.131186ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0919 12:07:56.416970  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node
I0919 12:07:56.418172  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-problem-detector: (952.704µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0919 12:07:56.420612  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.84068ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0919 12:07:56.421045  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector
I0919 12:07:56.422055  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kubelet-api-admin: (787.659µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0919 12:07:56.424309  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.716485ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0919 12:07:56.424581  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kubelet-api-admin
I0919 12:07:56.425600  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-bootstrapper: (837.704µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0919 12:07:56.427751  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.753738ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0919 12:07:56.427994  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-bootstrapper
I0919 12:07:56.429202  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:auth-delegator: (923.072µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0919 12:07:56.431271  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.602482ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0919 12:07:56.431489  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:auth-delegator
I0919 12:07:56.432525  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-aggregator: (843.902µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0919 12:07:56.434668  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.567592ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0919 12:07:56.434963  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator
I0919 12:07:56.436328  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-controller-manager: (1.134556ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0919 12:07:56.439144  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.359118ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0919 12:07:56.439624  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager
I0919 12:07:56.440808  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-dns: (971.941µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0919 12:07:56.442917  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.713752ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0919 12:07:56.443112  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-dns
I0919 12:07:56.444263  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:persistent-volume-provisioner: (899.187µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0919 12:07:56.446364  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.663399ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0919 12:07:56.446625  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner
I0919 12:07:56.448026  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-attacher: (1.195551ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0919 12:07:56.450388  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.548639ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0919 12:07:56.450724  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:csi-external-attacher
I0919 12:07:56.452147  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:nodeclient: (1.118516ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0919 12:07:56.454500  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.823749ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0919 12:07:56.454823  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:nodeclient
I0919 12:07:56.456060  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient: (908.396µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0919 12:07:56.458735  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.059506ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0919 12:07:56.458983  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
I0919 12:07:56.459263  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:07:56.459443  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 12:07:56.459706  108421 httplog.go:90] GET /healthz: (2.450479ms) 0 [Go-http-client/1.1 127.0.0.1:57538]
I0919 12:07:56.460721  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:volume-scheduler: (1.434087ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0919 12:07:56.463026  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.744577ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0919 12:07:56.463483  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:volume-scheduler
I0919 12:07:56.464636  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-proxier: (892.023µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0919 12:07:56.467007  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.830919ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0919 12:07:56.467442  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-proxier
I0919 12:07:56.467988  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:07:56.468143  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 12:07:56.468341  108421 httplog.go:90] GET /healthz: (1.12189ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57538]
I0919 12:07:56.469035  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-scheduler: (1.194124ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0919 12:07:56.471598  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.838825ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0919 12:07:56.471852  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler
I0919 12:07:56.472926  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-provisioner: (871.771µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0919 12:07:56.475001  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.64824ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0919 12:07:56.475263  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:csi-external-provisioner
I0919 12:07:56.476385  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:attachdetach-controller: (913.56µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0919 12:07:56.478523  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.712239ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0919 12:07:56.478760  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0919 12:07:56.479838  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:clusterrole-aggregation-controller: (895.65µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0919 12:07:56.481944  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.637673ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0919 12:07:56.482260  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0919 12:07:56.483570  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:cronjob-controller: (949.693µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0919 12:07:56.485557  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.5452ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0919 12:07:56.485972  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0919 12:07:56.487348  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:daemon-set-controller: (1.163808ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0919 12:07:56.489937  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.045135ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0919 12:07:56.490239  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0919 12:07:56.491621  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:deployment-controller: (1.084402ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0919 12:07:56.493886  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.703898ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0919 12:07:56.494128  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller
I0919 12:07:56.495332  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:disruption-controller: (940.17µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0919 12:07:56.497775  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.911825ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0919 12:07:56.498030  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller
I0919 12:07:56.499262  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:endpoint-controller: (995.703µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0919 12:07:56.501270  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.603919ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0919 12:07:56.501494  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0919 12:07:56.502864  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:expand-controller: (1.083758ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0919 12:07:56.505252  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.964686ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0919 12:07:56.505614  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:expand-controller
I0919 12:07:56.506800  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:generic-garbage-collector: (965.233µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0919 12:07:56.508639  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.502811ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0919 12:07:56.508897  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0919 12:07:56.510615  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:horizontal-pod-autoscaler: (1.422871ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0919 12:07:56.513557  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.578744ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0919 12:07:56.513966  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0919 12:07:56.515100  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:job-controller: (892.371µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0919 12:07:56.517198  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.644708ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0919 12:07:56.517623  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller
I0919 12:07:56.518977  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:namespace-controller: (910.787µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0919 12:07:56.521390  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.940482ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0919 12:07:56.521816  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:namespace-controller
I0919 12:07:56.523507  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:node-controller: (1.30083ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0919 12:07:56.525852  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.754154ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0919 12:07:56.526080  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:node-controller
I0919 12:07:56.527177  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:persistent-volume-binder: (883.927µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0919 12:07:56.529676  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.924903ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0919 12:07:56.530023  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0919 12:07:56.531552  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pod-garbage-collector: (1.132625ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0919 12:07:56.534048  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.064356ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0919 12:07:56.534371  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0919 12:07:56.535588  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replicaset-controller: (979.797µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0919 12:07:56.538206  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.057168ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0919 12:07:56.538681  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0919 12:07:56.540019  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replication-controller: (1.040867ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0919 12:07:56.542867  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.269256ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0919 12:07:56.543107  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller
I0919 12:07:56.544734  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:resourcequota-controller: (953.838µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0919 12:07:56.546754  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.62852ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0919 12:07:56.547188  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0919 12:07:56.548477  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:route-controller: (1.056999ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0919 12:07:56.550762  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.68916ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0919 12:07:56.551120  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:route-controller
I0919 12:07:56.552249  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-account-controller: (935.944µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0919 12:07:56.554544  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.739217ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0919 12:07:56.554805  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-account-controller
I0919 12:07:56.556081  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-controller: (1.060778ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0919 12:07:56.558503  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:07:56.558553  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 12:07:56.558582  108421 httplog.go:90] GET /healthz: (1.87043ms) 0 [Go-http-client/1.1 127.0.0.1:57384]
I0919 12:07:56.558939  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.689614ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57538]
I0919 12:07:56.559242  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-controller
I0919 12:07:56.560290  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:statefulset-controller: (802.731µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57538]
I0919 12:07:56.562348  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.493213ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57538]
I0919 12:07:56.562799  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0919 12:07:56.563967  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:ttl-controller: (850.4µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57538]
I0919 12:07:56.566189  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.498649ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57538]
I0919 12:07:56.566368  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-controller
I0919 12:07:56.567738  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:certificate-controller: (1.064227ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57538]
I0919 12:07:56.568354  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:07:56.568397  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 12:07:56.568484  108421 httplog.go:90] GET /healthz: (1.152985ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0919 12:07:56.578020  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.141003ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0919 12:07:56.578344  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller
I0919 12:07:56.597812  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pvc-protection-controller: (1.918339ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0919 12:07:56.648188  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (32.155854ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0919 12:07:56.648784  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0919 12:07:56.650205  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pv-protection-controller: (1.175671ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0919 12:07:56.657897  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.107969ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0919 12:07:56.657924  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:07:56.657945  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 12:07:56.658004  108421 httplog.go:90] GET /healthz: (1.053803ms) 0 [Go-http-client/1.1 127.0.0.1:57538]
I0919 12:07:56.658147  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0919 12:07:56.668366  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:07:56.668404  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 12:07:56.668458  108421 httplog.go:90] GET /healthz: (1.169007ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57538]
I0919 12:07:56.676946  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin: (1.140144ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57538]
I0919 12:07:56.698322  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.281128ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57538]
I0919 12:07:56.698652  108421 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin
I0919 12:07:56.717189  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (1.302138ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57538]
I0919 12:07:56.750252  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (6.357486ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57538]
I0919 12:07:56.758560  108421 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery
I0919 12:07:56.759375  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:07:56.759509  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 12:07:56.759643  108421 httplog.go:90] GET /healthz: (2.297592ms) 0 [Go-http-client/1.1 127.0.0.1:57384]
I0919 12:07:56.762584  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:basic-user: (3.092581ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57538]
I0919 12:07:56.772036  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:07:56.772192  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 12:07:56.772236  108421 httplog.go:90] GET /healthz: (3.976624ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57538]
I0919 12:07:56.783660  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (5.164988ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57538]
I0919 12:07:56.786045  108421 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user
I0919 12:07:56.797726  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:public-info-viewer: (1.534992ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57538]
I0919 12:07:56.830809  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.609657ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57538]
I0919 12:07:56.831230  108421 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:public-info-viewer
I0919 12:07:56.839256  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node-proxier: (1.350986ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57538]
I0919 12:07:56.858983  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:07:56.860949  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 12:07:56.861257  108421 httplog.go:90] GET /healthz: (3.567252ms) 0 [Go-http-client/1.1 127.0.0.1:57538]
I0919 12:07:56.862267  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (4.7895ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0919 12:07:56.862546  108421 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier
I0919 12:07:56.868453  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:07:56.868569  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 12:07:56.868688  108421 httplog.go:90] GET /healthz: (1.318009ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0919 12:07:56.877491  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-controller-manager: (1.577886ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0919 12:07:56.898740  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.695318ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0919 12:07:56.899218  108421 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager
I0919 12:07:56.917570  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-dns: (1.579638ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0919 12:07:56.938815  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.74973ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0919 12:07:56.939221  108421 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns
I0919 12:07:56.958719  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-scheduler: (2.6947ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0919 12:07:56.959474  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:07:56.959507  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 12:07:56.959548  108421 httplog.go:90] GET /healthz: (968.443µs) 0 [Go-http-client/1.1 127.0.0.1:57538]
I0919 12:07:56.969889  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:07:56.970061  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 12:07:56.970223  108421 httplog.go:90] GET /healthz: (1.39936ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57538]
I0919 12:07:56.979181  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.534509ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57538]
I0919 12:07:56.979600  108421 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler
I0919 12:07:57.001909  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:volume-scheduler: (1.605837ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57538]
I0919 12:07:57.018614  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.663049ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57538]
I0919 12:07:57.019252  108421 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:volume-scheduler
I0919 12:07:57.038864  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node: (2.081119ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57538]
I0919 12:07:57.058271  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:07:57.058298  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 12:07:57.058332  108421 httplog.go:90] GET /healthz: (1.550539ms) 0 [Go-http-client/1.1 127.0.0.1:57384]
I0919 12:07:57.058838  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.906988ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57538]
I0919 12:07:57.059171  108421 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node
I0919 12:07:57.068273  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:07:57.068307  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 12:07:57.068345  108421 httplog.go:90] GET /healthz: (1.15656ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57538]
I0919 12:07:57.077385  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:attachdetach-controller: (1.452968ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57538]
I0919 12:07:57.102488  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.47384ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57538]
I0919 12:07:57.102945  108421 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0919 12:07:57.117358  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:clusterrole-aggregation-controller: (1.378394ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57538]
I0919 12:07:57.139839  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.20864ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57538]
I0919 12:07:57.140100  108421 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0919 12:07:57.157481  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:cronjob-controller: (1.529112ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57538]
I0919 12:07:57.163615  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:07:57.163646  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 12:07:57.163691  108421 httplog.go:90] GET /healthz: (6.061853ms) 0 [Go-http-client/1.1 127.0.0.1:57384]
I0919 12:07:57.168352  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:07:57.168382  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 12:07:57.168436  108421 httplog.go:90] GET /healthz: (1.127065ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0919 12:07:57.178646  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.734638ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0919 12:07:57.178923  108421 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0919 12:07:57.197369  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:daemon-set-controller: (1.452618ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0919 12:07:57.218703  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.771677ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0919 12:07:57.219038  108421 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0919 12:07:57.239851  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:deployment-controller: (2.509725ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0919 12:07:57.264221  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:07:57.264268  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 12:07:57.264311  108421 httplog.go:90] GET /healthz: (6.352864ms) 0 [Go-http-client/1.1 127.0.0.1:57384]
I0919 12:07:57.264674  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (7.593799ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57538]
I0919 12:07:57.264918  108421 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I0919 12:07:57.269929  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:07:57.269961  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 12:07:57.270004  108421 httplog.go:90] GET /healthz: (1.068121ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57538]
I0919 12:07:57.277904  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:disruption-controller: (2.000142ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57538]
I0919 12:07:57.298227  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.33777ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57538]
I0919 12:07:57.298526  108421 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I0919 12:07:57.317510  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:endpoint-controller: (1.421797ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57538]
I0919 12:07:57.338269  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.000631ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57538]
I0919 12:07:57.338569  108421 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0919 12:07:57.362754  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:expand-controller: (5.668086ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0919 12:07:57.362847  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:07:57.362871  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 12:07:57.362911  108421 httplog.go:90] GET /healthz: (5.780144ms) 0 [Go-http-client/1.1 127.0.0.1:57538]
I0919 12:07:57.368224  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:07:57.368256  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 12:07:57.368297  108421 httplog.go:90] GET /healthz: (993.877µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57538]
I0919 12:07:57.378740  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.805731ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57538]
I0919 12:07:57.378994  108421 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
I0919 12:07:57.398449  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:generic-garbage-collector: (2.485875ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57538]
I0919 12:07:57.418700  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.791117ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57538]
I0919 12:07:57.421748  108421 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0919 12:07:57.438522  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:horizontal-pod-autoscaler: (1.297215ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57538]
I0919 12:07:57.458486  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:07:57.458519  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 12:07:57.458560  108421 httplog.go:90] GET /healthz: (1.920647ms) 0 [Go-http-client/1.1 127.0.0.1:57384]
I0919 12:07:57.459653  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.711743ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57538]
I0919 12:07:57.460301  108421 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0919 12:07:57.470145  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:07:57.470173  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 12:07:57.470218  108421 httplog.go:90] GET /healthz: (2.727225ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57538]
I0919 12:07:57.477708  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:job-controller: (1.808216ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57538]
I0919 12:07:57.502955  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (6.530996ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57538]
I0919 12:07:57.503276  108421 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I0919 12:07:57.517057  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:namespace-controller: (1.16401ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57538]
I0919 12:07:57.538016  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.115288ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57538]
I0919 12:07:57.538293  108421 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I0919 12:07:57.557173  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:node-controller: (1.266084ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57538]
I0919 12:07:57.557938  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:07:57.557963  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 12:07:57.557996  108421 httplog.go:90] GET /healthz: (1.266176ms) 0 [Go-http-client/1.1 127.0.0.1:57384]
I0919 12:07:57.568531  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:07:57.568558  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 12:07:57.568604  108421 httplog.go:90] GET /healthz: (1.296075ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0919 12:07:57.578189  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.987106ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0919 12:07:57.578444  108421 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I0919 12:07:57.598655  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:persistent-volume-binder: (2.788005ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0919 12:07:57.618317  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.345671ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0919 12:07:57.618563  108421 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0919 12:07:57.637161  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pod-garbage-collector: (1.105054ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0919 12:07:57.657900  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:07:57.657933  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 12:07:57.657969  108421 httplog.go:90] GET /healthz: (911.069µs) 0 [Go-http-client/1.1 127.0.0.1:57538]
I0919 12:07:57.658605  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.606035ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0919 12:07:57.658841  108421 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0919 12:07:57.668234  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:07:57.668261  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 12:07:57.668295  108421 httplog.go:90] GET /healthz: (1.049182ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57538]
I0919 12:07:57.677022  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replicaset-controller: (1.166662ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57538]
I0919 12:07:57.698388  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.398193ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57538]
I0919 12:07:57.698663  108421 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0919 12:07:57.717470  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replication-controller: (1.601576ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57538]
I0919 12:07:57.738071  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.116917ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57538]
I0919 12:07:57.738513  108421 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I0919 12:07:57.757344  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:resourcequota-controller: (1.450224ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57538]
I0919 12:07:57.758907  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:07:57.758933  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 12:07:57.758968  108421 httplog.go:90] GET /healthz: (1.442322ms) 0 [Go-http-client/1.1 127.0.0.1:57384]
I0919 12:07:57.768532  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:07:57.768564  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 12:07:57.768605  108421 httplog.go:90] GET /healthz: (1.276763ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0919 12:07:57.778232  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.313318ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0919 12:07:57.778534  108421 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0919 12:07:57.797373  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:route-controller: (1.49287ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0919 12:07:57.818381  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.47297ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0919 12:07:57.818672  108421 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I0919 12:07:57.837272  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-account-controller: (1.307687ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0919 12:07:57.858226  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:07:57.858260  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 12:07:57.858299  108421 httplog.go:90] GET /healthz: (1.642093ms) 0 [Go-http-client/1.1 127.0.0.1:57538]
I0919 12:07:57.858801  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.858681ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0919 12:07:57.859030  108421 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I0919 12:07:57.868263  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:07:57.868295  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 12:07:57.868335  108421 httplog.go:90] GET /healthz: (1.107376ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0919 12:07:57.877117  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-controller: (1.199166ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0919 12:07:57.898238  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.271943ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0919 12:07:57.898924  108421 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I0919 12:07:57.917128  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:statefulset-controller: (1.217371ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0919 12:07:57.937929  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.02617ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0919 12:07:57.938181  108421 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0919 12:07:57.957251  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:ttl-controller: (1.376967ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0919 12:07:57.958218  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:07:57.958246  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 12:07:57.958284  108421 httplog.go:90] GET /healthz: (1.179087ms) 0 [Go-http-client/1.1 127.0.0.1:57538]
I0919 12:07:57.968314  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:07:57.968345  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 12:07:57.968390  108421 httplog.go:90] GET /healthz: (1.169986ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57538]
I0919 12:07:57.977903  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.037572ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57538]
I0919 12:07:57.978185  108421 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I0919 12:07:58.000793  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:certificate-controller: (4.917246ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57538]
I0919 12:07:58.017788  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.941436ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57538]
I0919 12:07:58.018125  108421 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I0919 12:07:58.037063  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pvc-protection-controller: (1.159048ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57538]
I0919 12:07:58.058838  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:07:58.058867  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 12:07:58.058910  108421 httplog.go:90] GET /healthz: (2.251594ms) 0 [Go-http-client/1.1 127.0.0.1:57384]
I0919 12:07:58.059245  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.212743ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57538]
I0919 12:07:58.059515  108421 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0919 12:07:58.068159  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:07:58.068190  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 12:07:58.068229  108421 httplog.go:90] GET /healthz: (979.493µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57538]
I0919 12:07:58.079095  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pv-protection-controller: (1.387273ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57538]
I0919 12:07:58.097340  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.436096ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57538]
I0919 12:07:58.097870  108421 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0919 12:07:58.117056  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/extension-apiserver-authentication-reader: (1.17256ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57538]
I0919 12:07:58.118824  108421 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.237146ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57538]
I0919 12:07:58.138328  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.357495ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57538]
I0919 12:07:58.138714  108421 storage_rbac.go:278] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
I0919 12:07:58.157410  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:bootstrap-signer: (1.466704ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57538]
I0919 12:07:58.157696  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:07:58.157717  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 12:07:58.157746  108421 httplog.go:90] GET /healthz: (1.061308ms) 0 [Go-http-client/1.1 127.0.0.1:57384]
I0919 12:07:58.159387  108421 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.424262ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57538]
I0919 12:07:58.168489  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:07:58.168519  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 12:07:58.168553  108421 httplog.go:90] GET /healthz: (1.235466ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57538]
I0919 12:07:58.177820  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.92427ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57538]
I0919 12:07:58.178317  108421 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0919 12:07:58.197365  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:cloud-provider: (1.442489ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57538]
I0919 12:07:58.199147  108421 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.294338ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57538]
I0919 12:07:58.217973  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.030632ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57538]
I0919 12:07:58.218489  108421 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0919 12:07:58.236901  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:token-cleaner: (1.057691ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57538]
I0919 12:07:58.238262  108421 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.021646ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57538]
I0919 12:07:58.258272  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:07:58.258311  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 12:07:58.258349  108421 httplog.go:90] GET /healthz: (1.690969ms) 0 [Go-http-client/1.1 127.0.0.1:57384]
I0919 12:07:58.258654  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.688415ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57538]
I0919 12:07:58.258943  108421 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0919 12:07:58.268476  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:07:58.268509  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 12:07:58.268557  108421 httplog.go:90] GET /healthz: (1.2425ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57538]
I0919 12:07:58.277313  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-controller-manager: (1.32068ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57538]
I0919 12:07:58.278986  108421 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.074529ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57538]
I0919 12:07:58.298306  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.357887ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57538]
I0919 12:07:58.298685  108421 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0919 12:07:58.317202  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-scheduler: (1.212449ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57538]
I0919 12:07:58.319067  108421 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.255251ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57538]
I0919 12:07:58.339187  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.436204ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57538]
I0919 12:07:58.339677  108421 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0919 12:07:58.357790  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:07:58.358073  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 12:07:58.358513  108421 httplog.go:90] GET /healthz: (1.786682ms) 0 [Go-http-client/1.1 127.0.0.1:57384]
I0919 12:07:58.357826  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles/system:controller:bootstrap-signer: (1.917545ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57538]
I0919 12:07:58.360720  108421 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.452151ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57538]
I0919 12:07:58.368592  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:07:58.368630  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 12:07:58.368707  108421 httplog.go:90] GET /healthz: (1.326919ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57538]
I0919 12:07:58.378207  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles: (2.264374ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57538]
I0919 12:07:58.378569  108421 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0919 12:07:58.397057  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings/system:controller:bootstrap-signer: (1.005386ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57538]
I0919 12:07:58.398969  108421 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.326263ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57538]
I0919 12:07:58.418000  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings: (2.150748ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57538]
I0919 12:07:58.418237  108421 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0919 12:07:58.437038  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::extension-apiserver-authentication-reader: (1.161395ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57538]
I0919 12:07:58.438924  108421 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.318636ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57538]
I0919 12:07:58.457810  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:07:58.457843  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 12:07:58.457884  108421 httplog.go:90] GET /healthz: (1.187904ms) 0 [Go-http-client/1.1 127.0.0.1:57384]
I0919 12:07:58.458052  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.097448ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57538]
I0919 12:07:58.458275  108421 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::extension-apiserver-authentication-reader in kube-system
I0919 12:07:58.468352  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:07:58.468657  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 12:07:58.469198  108421 httplog.go:90] GET /healthz: (1.920723ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57538]
I0919 12:07:58.477103  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-controller-manager: (1.262205ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57538]
I0919 12:07:58.478845  108421 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.185199ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57538]
I0919 12:07:58.498484  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.477871ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57538]
I0919 12:07:58.499029  108421 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0919 12:07:58.517770  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-scheduler: (1.774872ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57538]
I0919 12:07:58.519738  108421 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.208856ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57538]
I0919 12:07:58.538483  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.537118ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57538]
I0919 12:07:58.538768  108421 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0919 12:07:58.557247  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:bootstrap-signer: (1.337946ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57538]
I0919 12:07:58.558020  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:07:58.558047  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 12:07:58.558081  108421 httplog.go:90] GET /healthz: (1.43143ms) 0 [Go-http-client/1.1 127.0.0.1:57384]
I0919 12:07:58.558958  108421 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.272928ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57538]
I0919 12:07:58.569389  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:07:58.569452  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 12:07:58.569493  108421 httplog.go:90] GET /healthz: (2.227638ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57538]
I0919 12:07:58.578083  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.877895ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57538]
I0919 12:07:58.578438  108421 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0919 12:07:58.597395  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:cloud-provider: (1.397698ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57538]
I0919 12:07:58.599461  108421 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.260864ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57538]
I0919 12:07:58.617895  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.981823ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57538]
I0919 12:07:58.618346  108421 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0919 12:07:58.637556  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:token-cleaner: (1.452338ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57538]
I0919 12:07:58.639267  108421 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.167043ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57538]
I0919 12:07:58.657876  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:07:58.657910  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 12:07:58.657948  108421 httplog.go:90] GET /healthz: (1.13578ms) 0 [Go-http-client/1.1 127.0.0.1:57384]
I0919 12:07:58.658307  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.331675ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57538]
I0919 12:07:58.658628  108421 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0919 12:07:58.668800  108421 httplog.go:90] GET /healthz: (1.517356ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57538]
I0919 12:07:58.670391  108421 httplog.go:90] GET /api/v1/namespaces/default: (1.226584ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57538]
I0919 12:07:58.672686  108421 httplog.go:90] POST /api/v1/namespaces: (1.858971ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57538]
I0919 12:07:58.674157  108421 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (922.179µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57538]
I0919 12:07:58.678774  108421 httplog.go:90] POST /api/v1/namespaces/default/services: (4.137491ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57538]
I0919 12:07:58.680163  108421 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (993.68µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57538]
I0919 12:07:58.682322  108421 httplog.go:90] POST /api/v1/namespaces/default/endpoints: (1.684301ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57538]
I0919 12:07:58.757881  108421 httplog.go:90] GET /healthz: (1.099503ms) 200 [Go-http-client/1.1 127.0.0.1:57538]
W0919 12:07:58.758578  108421 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 12:07:58.758649  108421 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 12:07:58.758663  108421 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 12:07:58.758695  108421 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 12:07:58.758705  108421 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 12:07:58.758714  108421 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 12:07:58.758720  108421 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 12:07:58.758733  108421 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 12:07:58.758747  108421 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 12:07:58.758758  108421 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 12:07:58.758814  108421 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0919 12:07:58.758832  108421 factory.go:294] Creating scheduler from algorithm provider 'DefaultProvider'
I0919 12:07:58.758843  108421 factory.go:382] Creating scheduler with fit predicates 'map[CheckNodeUnschedulable:{} CheckVolumeBinding:{} GeneralPredicates:{} MatchInterPodAffinity:{} MaxAzureDiskVolumeCount:{} MaxCSIVolumeCountPred:{} MaxEBSVolumeCount:{} MaxGCEPDVolumeCount:{} NoDiskConflict:{} NoVolumeZoneConflict:{} PodToleratesNodeTaints:{}]' and priority functions 'map[BalancedResourceAllocation:{} ImageLocalityPriority:{} InterPodAffinityPriority:{} LeastRequestedPriority:{} NodeAffinityPriority:{} NodePreferAvoidPodsPriority:{} SelectorSpreadPriority:{} TaintTolerationPriority:{}]'
I0919 12:07:58.759023  108421 shared_informer.go:197] Waiting for caches to sync for scheduler
I0919 12:07:58.759241  108421 reflector.go:118] Starting reflector *v1.Pod (12h0m0s) from k8s.io/kubernetes/test/integration/scheduler/util.go:232
I0919 12:07:58.759259  108421 reflector.go:153] Listing and watching *v1.Pod from k8s.io/kubernetes/test/integration/scheduler/util.go:232
I0919 12:07:58.760239  108421 httplog.go:90] GET /api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0: (642.875µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57538]
I0919 12:07:58.761267  108421 get.go:251] Starting watch for /api/v1/pods, rv=30267 labels= fields=status.phase!=Failed,status.phase!=Succeeded timeout=7m52s
I0919 12:07:58.859277  108421 shared_informer.go:227] caches populated
I0919 12:07:58.859308  108421 shared_informer.go:204] Caches are synced for scheduler 
I0919 12:07:58.859668  108421 reflector.go:118] Starting reflector *v1beta1.CSINode (1s) from k8s.io/client-go/informers/factory.go:134
I0919 12:07:58.859698  108421 reflector.go:153] Listing and watching *v1beta1.CSINode from k8s.io/client-go/informers/factory.go:134
I0919 12:07:58.860037  108421 reflector.go:118] Starting reflector *v1.PersistentVolume (1s) from k8s.io/client-go/informers/factory.go:134
I0919 12:07:58.860054  108421 reflector.go:153] Listing and watching *v1.PersistentVolume from k8s.io/client-go/informers/factory.go:134
I0919 12:07:58.860355  108421 reflector.go:118] Starting reflector *v1beta1.PodDisruptionBudget (1s) from k8s.io/client-go/informers/factory.go:134
I0919 12:07:58.860379  108421 reflector.go:153] Listing and watching *v1beta1.PodDisruptionBudget from k8s.io/client-go/informers/factory.go:134
I0919 12:07:58.860507  108421 reflector.go:118] Starting reflector *v1.Node (1s) from k8s.io/client-go/informers/factory.go:134
I0919 12:07:58.860528  108421 reflector.go:153] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:134
I0919 12:07:58.860971  108421 reflector.go:118] Starting reflector *v1.PersistentVolumeClaim (1s) from k8s.io/client-go/informers/factory.go:134
I0919 12:07:58.860997  108421 reflector.go:153] Listing and watching *v1.PersistentVolumeClaim from k8s.io/client-go/informers/factory.go:134
I0919 12:07:58.860989  108421 reflector.go:118] Starting reflector *v1.StorageClass (1s) from k8s.io/client-go/informers/factory.go:134
I0919 12:07:58.861010  108421 reflector.go:153] Listing and watching *v1.StorageClass from k8s.io/client-go/informers/factory.go:134
I0919 12:07:58.861334  108421 reflector.go:118] Starting reflector *v1.Service (1s) from k8s.io/client-go/informers/factory.go:134
I0919 12:07:58.861358  108421 reflector.go:153] Listing and watching *v1.Service from k8s.io/client-go/informers/factory.go:134
I0919 12:07:58.861405  108421 reflector.go:118] Starting reflector *v1.ReplicationController (1s) from k8s.io/client-go/informers/factory.go:134
I0919 12:07:58.861439  108421 reflector.go:153] Listing and watching *v1.ReplicationController from k8s.io/client-go/informers/factory.go:134
I0919 12:07:58.861730  108421 reflector.go:118] Starting reflector *v1.ReplicaSet (1s) from k8s.io/client-go/informers/factory.go:134
I0919 12:07:58.861757  108421 reflector.go:153] Listing and watching *v1.ReplicaSet from k8s.io/client-go/informers/factory.go:134
I0919 12:07:58.861780  108421 reflector.go:118] Starting reflector *v1.StatefulSet (1s) from k8s.io/client-go/informers/factory.go:134
I0919 12:07:58.861793  108421 reflector.go:153] Listing and watching *v1.StatefulSet from k8s.io/client-go/informers/factory.go:134
I0919 12:07:58.863199  108421 httplog.go:90] GET /apis/storage.k8s.io/v1beta1/csinodes?limit=500&resourceVersion=0: (542.688µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0919 12:07:58.863310  108421 httplog.go:90] GET /api/v1/nodes?limit=500&resourceVersion=0: (437.973µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57590]
I0919 12:07:58.863821  108421 httplog.go:90] GET /apis/apps/v1/replicasets?limit=500&resourceVersion=0: (454.281µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57602]
I0919 12:07:58.863841  108421 httplog.go:90] GET /apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: (266.331µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57606]
I0919 12:07:58.864374  108421 httplog.go:90] GET /api/v1/persistentvolumes?limit=500&resourceVersion=0: (482.095µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0919 12:07:58.864490  108421 get.go:251] Starting watch for /apis/storage.k8s.io/v1beta1/csinodes, rv=30272 labels= fields= timeout=6m4s
I0919 12:07:58.864729  108421 get.go:251] Starting watch for /apis/storage.k8s.io/v1/storageclasses, rv=30272 labels= fields= timeout=7m14s
I0919 12:07:58.865035  108421 get.go:251] Starting watch for /api/v1/nodes, rv=30267 labels= fields= timeout=7m50s
I0919 12:07:58.865130  108421 get.go:251] Starting watch for /apis/apps/v1/replicasets, rv=30273 labels= fields= timeout=7m20s
I0919 12:07:58.865524  108421 httplog.go:90] GET /api/v1/replicationcontrollers?limit=500&resourceVersion=0: (542.893µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57600]
I0919 12:07:58.865577  108421 httplog.go:90] GET /apis/apps/v1/statefulsets?limit=500&resourceVersion=0: (2.177864ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57604]
I0919 12:07:58.865701  108421 httplog.go:90] GET /api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: (591.682µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57596]
I0919 12:07:58.865914  108421 get.go:251] Starting watch for /api/v1/persistentvolumes, rv=30267 labels= fields= timeout=8m32s
I0919 12:07:58.866231  108421 httplog.go:90] GET /api/v1/services?limit=500&resourceVersion=0: (424.79µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57598]
I0919 12:07:58.866298  108421 get.go:251] Starting watch for /apis/apps/v1/statefulsets, rv=30272 labels= fields= timeout=6m34s
I0919 12:07:58.866573  108421 get.go:251] Starting watch for /api/v1/replicationcontrollers, rv=30268 labels= fields= timeout=5m37s
I0919 12:07:58.866733  108421 get.go:251] Starting watch for /api/v1/persistentvolumeclaims, rv=30267 labels= fields= timeout=8m7s
I0919 12:07:58.867028  108421 get.go:251] Starting watch for /api/v1/services, rv=30561 labels= fields= timeout=8m9s
I0919 12:07:58.867555  108421 httplog.go:90] GET /apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: (415.005µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0919 12:07:58.868169  108421 get.go:251] Starting watch for /apis/policy/v1beta1/poddisruptionbudgets, rv=30271 labels= fields= timeout=5m18s
I0919 12:07:58.959566  108421 shared_informer.go:227] caches populated
I0919 12:07:58.959595  108421 shared_informer.go:227] caches populated
I0919 12:07:58.959646  108421 shared_informer.go:227] caches populated
I0919 12:07:58.959655  108421 shared_informer.go:227] caches populated
I0919 12:07:58.959660  108421 shared_informer.go:227] caches populated
I0919 12:07:58.959666  108421 shared_informer.go:227] caches populated
I0919 12:07:58.959672  108421 shared_informer.go:227] caches populated
I0919 12:07:58.959678  108421 shared_informer.go:227] caches populated
I0919 12:07:58.959684  108421 shared_informer.go:227] caches populated
I0919 12:07:58.959716  108421 shared_informer.go:227] caches populated
I0919 12:07:58.959727  108421 shared_informer.go:227] caches populated
I0919 12:07:58.959950  108421 node_lifecycle_controller.go:327] Sending events to api server.
I0919 12:07:58.960114  108421 node_lifecycle_controller.go:359] Controller is using taint based evictions.
W0919 12:07:58.960164  108421 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0919 12:07:58.960332  108421 taint_manager.go:162] Sending events to api server.
I0919 12:07:58.960535  108421 node_lifecycle_controller.go:453] Controller will reconcile labels.
I0919 12:07:58.960603  108421 node_lifecycle_controller.go:465] Controller will taint node by condition.
W0919 12:07:58.960625  108421 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 12:07:58.960730  108421 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0919 12:07:58.960883  108421 node_lifecycle_controller.go:488] Starting node controller
I0919 12:07:58.960915  108421 shared_informer.go:197] Waiting for caches to sync for taint
I0919 12:07:58.963148  108421 httplog.go:90] POST /api/v1/nodes: (1.872313ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:07:58.963577  108421 node_tree.go:93] Added node "testnode" in group "" to NodeTree
I0919 12:07:58.965868  108421 httplog.go:90] PUT /api/v1/nodes/testnode/status: (2.193164ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:07:58.968521  108421 httplog.go:90] POST /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods: (2.129425ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:07:58.968973  108421 scheduling_queue.go:830] About to try and schedule pod node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pidpressure-fake-name
I0919 12:07:58.968991  108421 scheduler.go:530] Attempting to schedule pod: node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pidpressure-fake-name
I0919 12:07:58.969130  108421 scheduler_binder.go:257] AssumePodVolumes for pod "node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pidpressure-fake-name", node "testnode"
I0919 12:07:58.969148  108421 scheduler_binder.go:267] AssumePodVolumes for pod "node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pidpressure-fake-name", node "testnode": all PVCs bound and nothing to do
I0919 12:07:58.969206  108421 factory.go:606] Attempting to bind pidpressure-fake-name to testnode
I0919 12:07:58.971618  108421 httplog.go:90] POST /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name/binding: (2.009414ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:07:58.972260  108421 scheduler.go:662] pod node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pidpressure-fake-name is bound successfully on node "testnode", 1 nodes evaluated, 1 nodes were found feasible. Bound node resource: "Capacity: CPU<0>|Memory<0>|Pods<32>|StorageEphemeral<0>; Allocatable: CPU<0>|Memory<0>|Pods<32>|StorageEphemeral<0>.".
I0919 12:07:58.976171  108421 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/events: (2.479832ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:07:59.071230  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.751992ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:07:59.181847  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (12.342849ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:07:59.271107  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.635917ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:07:59.371560  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (2.012413ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:07:59.471466  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.921399ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:07:59.571228  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.681762ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:07:59.671215  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.715087ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:07:59.771220  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.786469ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:07:59.864460  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:07:59.864500  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:07:59.864629  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:07:59.865043  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:07:59.866517  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:07:59.866797  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:07:59.871070  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.673834ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:07:59.971224  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.847779ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:00.071163  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.672958ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:00.170935  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.453492ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:00.271133  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.631985ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:00.371326  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.781441ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:00.471131  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.660344ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:00.571158  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.638043ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:00.671223  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.739762ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:00.771028  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.530782ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:00.864670  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:00.864676  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:00.864886  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:00.865190  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:00.866704  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:00.866938  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:00.870914  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.530789ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:00.971718  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (2.169145ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:01.070993  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.532874ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:01.171311  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.797505ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:01.271228  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.722028ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:01.371029  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.611094ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:01.471118  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.687547ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:01.571251  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.752473ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:01.671400  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.880861ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:01.771036  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.638835ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:01.865388  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:01.865765  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:01.865812  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:01.865892  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:01.866873  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:01.867094  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:01.871560  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.797951ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:01.971058  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.68684ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:02.071396  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.906431ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:02.171329  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.903204ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:02.271399  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.894381ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:02.371276  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.844876ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:02.471364  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.891935ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:02.571358  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.850459ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:02.671401  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.986935ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:02.771578  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (2.033169ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:02.865637  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:02.865939  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:02.866049  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:02.866096  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:02.867052  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:02.867293  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:02.871121  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.669805ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:02.971363  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.853724ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:03.071291  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.80835ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:03.171391  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.95712ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:03.271288  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.800456ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:03.372085  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (2.436882ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:03.471205  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.696539ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:03.573115  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (3.571001ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:03.671358  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.875837ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:03.771188  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.765955ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:03.865813  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:03.866257  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:03.866341  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:03.866494  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:03.867253  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:03.867527  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:03.871325  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.764612ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:03.971278  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.781966ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:04.071474  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.935289ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:04.171466  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.918897ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:04.271348  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.857768ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:04.371500  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (2.071528ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:04.471687  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (2.173666ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:04.571125  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.611029ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:04.671167  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.670655ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:04.771472  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.885599ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:04.866045  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:04.866401  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:04.866539  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:04.866669  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:04.867479  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:04.867681  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:04.871241  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.738666ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:04.971467  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.980638ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:05.071129  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.708586ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:05.172915  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (3.170111ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:05.271370  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.861273ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:05.372910  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.881565ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:05.471479  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.950444ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:05.571766  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (2.283801ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:05.671743  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (2.016083ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:05.771030  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.519774ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:05.866243  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:05.866643  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:05.866690  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:05.866825  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:05.867673  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:05.867836  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:05.871157  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.7028ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:05.971143  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.714854ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:06.072035  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (2.008082ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:06.171374  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.821551ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:06.271163  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.672089ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:06.371521  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (2.001151ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:06.471220  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.739183ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:06.571281  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.749732ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:06.671371  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.841657ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:06.771493  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.750513ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:06.866496  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:06.866898  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:06.866899  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:06.866974  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:06.867835  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:06.867975  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:06.871331  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.814229ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:06.972913  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (3.443686ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:07.073113  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (3.652893ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:07.226693  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (57.226666ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:07.272534  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (2.992614ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:07.371207  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.722009ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:07.471104  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.603239ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:07.571184  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.704295ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:07.671117  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.724188ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:07.771250  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.749885ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:07.866994  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:07.867245  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:07.867371  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:07.867626  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:07.868181  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:07.868471  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:07.871276  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.809874ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:07.971346  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.704407ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:08.073231  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (3.682431ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:08.171382  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.884202ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:08.271888  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (2.284771ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:08.371278  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.803506ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:08.472559  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.925335ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:08.571536  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (2.010303ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:08.672256  108421 httplog.go:90] GET /api/v1/namespaces/default: (1.758636ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:08.673342  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (3.442629ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58066]
I0919 12:08:08.674181  108421 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.461858ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:08.675936  108421 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.35308ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:08.771557  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.966602ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:08.867116  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:08.867462  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:08.867561  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:08.867807  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:08.868328  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:08.868636  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:08.871152  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.718922ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:08.971673  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.97505ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:09.071097  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.631476ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:09.171189  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.685555ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:09.271404  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.885503ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:09.371625  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (2.039179ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:09.471285  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.740377ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:09.571335  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.77357ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:09.671227  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.799ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:09.771119  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.63431ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:09.867279  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:09.867692  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:09.867686  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:09.867982  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:09.868483  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:09.868809  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:09.871367  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.517208ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:09.971358  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.852596ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:10.071381  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.903275ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:10.171445  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.960857ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:10.271484  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (2.008353ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:10.371267  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.792181ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:10.471452  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.901567ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:10.571334  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.799268ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:10.671253  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.824891ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:10.771245  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.754543ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:10.867847  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:10.867899  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:10.867963  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:10.868112  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:10.868663  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:10.868942  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:10.871545  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.801585ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:10.972230  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (2.522419ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:11.071171  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.565475ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:11.171335  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.815574ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:11.271849  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (2.359847ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:11.371208  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.695549ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:11.471242  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.674139ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:11.571164  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.730082ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:11.671164  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.768567ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:11.771128  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.656398ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:11.868028  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:11.868029  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:11.868100  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:11.868569  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:11.868851  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:11.869061  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:11.871445  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.638532ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:11.972508  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (2.978833ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:12.071479  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.954373ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:12.171315  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.802197ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:12.271281  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.926834ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:12.371290  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.787391ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:12.471118  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.383608ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:12.575172  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (5.711472ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:12.673308  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (3.601878ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:12.770956  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.475816ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:12.868182  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:12.868185  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:12.868298  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:12.869001  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:12.869018  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:12.869179  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:12.871460  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (2.008721ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:12.971317  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.799684ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:13.071172  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.657329ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:13.171066  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.575145ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:13.271040  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.542308ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:13.371144  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.722745ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:13.471897  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (2.229381ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:13.571229  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.770911ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:13.671099  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.711414ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:13.771483  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.918721ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:13.868345  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:13.868413  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:13.868472  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:13.869333  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:13.869753  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:13.870558  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:13.871544  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.665161ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:13.971569  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (2.056803ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:14.071105  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.698165ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:14.171098  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.612695ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:14.271104  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.597418ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:14.371368  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.889602ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:14.471278  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.873325ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:14.571449  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.960659ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:14.672069  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (2.534173ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:14.771844  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.617059ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:14.868517  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:14.868585  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:14.868601  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:14.870204  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:14.870279  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:14.871369  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.736304ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:14.873646  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:14.972195  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (2.737307ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:15.071130  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.674243ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:15.172854  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (2.105323ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:15.271308  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.892988ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:15.380956  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.787711ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:15.471383  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.881428ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:15.571229  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.733828ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:15.671106  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.60503ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:15.774405  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.662591ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:15.868731  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:15.868781  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:15.868797  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:15.870507  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:15.870623  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:15.871719  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (2.214318ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:15.873800  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:15.971351  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.811949ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:16.070949  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.485733ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:16.171777  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (2.316679ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:16.271322  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.804583ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:16.371638  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (2.153005ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:16.471119  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.593779ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:16.571095  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.564187ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:16.671072  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.581427ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:16.771378  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.851367ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:16.868902  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:16.868914  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:16.868948  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:16.870653  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:16.871279  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:16.871535  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (2.038878ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:16.873879  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:16.971328  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.787271ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:17.071569  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (2.088216ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:17.171241  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.727836ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:17.271078  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.604038ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:17.372159  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.549738ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:17.471063  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.645433ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:17.571997  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.726796ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:17.671360  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.826084ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:17.771469  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.839202ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:17.869102  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:17.869126  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:17.869240  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:17.870872  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:17.871452  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:17.871542  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.737639ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:17.874073  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:17.971384  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.81232ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:18.071212  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.696809ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:18.171006  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.502813ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:18.272659  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (3.117553ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:18.371102  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.611511ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:18.471335  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.822908ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:18.570992  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.468194ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:18.671461  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.929778ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:18.672512  108421 httplog.go:90] GET /api/v1/namespaces/default: (2.105622ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58066]
I0919 12:08:18.673974  108421 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.087417ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58066]
I0919 12:08:18.675332  108421 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.030473ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58066]
I0919 12:08:18.771329  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.755839ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58066]
I0919 12:08:18.869504  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:18.869646  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:18.869681  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:18.871151  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.622826ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58066]
I0919 12:08:18.871493  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:18.871701  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:18.874208  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:18.971533  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.952344ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58066]
I0919 12:08:19.071143  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.655018ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58066]
I0919 12:08:19.171494  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.966005ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58066]
I0919 12:08:19.271126  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.496242ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58066]
I0919 12:08:19.373931  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (2.089797ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58066]
I0919 12:08:19.471356  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.8316ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58066]
I0919 12:08:19.571089  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.613341ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58066]
I0919 12:08:19.671269  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.804855ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58066]
I0919 12:08:19.771487  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.990203ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58066]
I0919 12:08:19.869678  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:19.869787  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:19.869802  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:19.871142  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.713248ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58066]
I0919 12:08:19.871614  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:19.871796  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:19.874281  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:19.971456  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (2.000836ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58066]
I0919 12:08:20.071571  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (2.014508ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58066]
I0919 12:08:20.171527  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (2.001534ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58066]
I0919 12:08:20.271349  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.798395ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58066]
I0919 12:08:20.371329  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.786466ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58066]
I0919 12:08:20.471314  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.778946ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58066]
I0919 12:08:20.571517  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (2.012403ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58066]
I0919 12:08:20.674136  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (4.658003ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58066]
I0919 12:08:20.771240  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.7387ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58066]
I0919 12:08:20.870256  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:20.870523  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:20.870544  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:20.871159  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.731208ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58066]
I0919 12:08:20.871806  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:20.871922  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:20.874494  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:20.970774  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.34792ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58066]
I0919 12:08:21.071100  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.637527ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58066]
I0919 12:08:21.171378  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.812831ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58066]
I0919 12:08:21.272009  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (2.356649ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58066]
I0919 12:08:21.372963  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (3.533633ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58066]
I0919 12:08:21.471004  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.595891ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58066]
I0919 12:08:21.571232  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.821962ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58066]
I0919 12:08:21.671332  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.861772ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58066]
I0919 12:08:21.771801  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.518959ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58066]
I0919 12:08:21.870491  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:21.870697  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:21.870718  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:21.870893  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.468369ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58066]
I0919 12:08:21.875194  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:21.875210  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:21.875189  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:21.972007  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (2.222605ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58066]
I0919 12:08:22.071261  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.830987ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58066]
I0919 12:08:22.171334  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.896111ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58066]
I0919 12:08:22.270886  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.418116ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58066]
I0919 12:08:22.372826  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (3.14416ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58066]
I0919 12:08:22.470819  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.398332ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58066]
I0919 12:08:22.578883  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (3.172141ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58066]
I0919 12:08:22.671304  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.780023ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58066]
I0919 12:08:22.771028  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.4406ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58066]
I0919 12:08:22.870704  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:22.870832  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:22.870853  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:22.871198  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.563463ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58066]
I0919 12:08:22.875390  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:22.875441  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:22.875532  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:22.971340  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.864525ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58066]
I0919 12:08:23.071732  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (2.061577ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58066]
I0919 12:08:23.170999  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.542445ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58066]
I0919 12:08:23.271067  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.458236ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58066]
I0919 12:08:23.372900  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (3.140213ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58066]
I0919 12:08:23.470909  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.417509ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58066]
I0919 12:08:23.570751  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.367817ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58066]
I0919 12:08:23.670899  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.483205ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58066]
I0919 12:08:23.771267  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.705313ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58066]
I0919 12:08:23.870982  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:23.871014  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:23.871222  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.704423ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58066]
I0919 12:08:23.871606  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:23.875564  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:23.875607  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:23.875653  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:23.971475  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.989379ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58066]
I0919 12:08:24.070918  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.541442ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58066]
I0919 12:08:24.171837  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.591102ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58066]
I0919 12:08:24.271476  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (2.047882ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58066]
I0919 12:08:24.371231  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.737747ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58066]
I0919 12:08:24.471844  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (2.171353ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58066]
I0919 12:08:24.570970  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.540745ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58066]
I0919 12:08:24.671154  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.734994ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58066]
I0919 12:08:24.771727  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (2.160882ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58066]
I0919 12:08:24.871119  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:24.871178  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:24.871228  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.806606ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58066]
I0919 12:08:24.871767  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:24.875739  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:24.875987  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:24.876513  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:24.971488  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (2.005565ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58066]
I0919 12:08:25.071052  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.588743ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58066]
I0919 12:08:25.171303  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.795508ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58066]
I0919 12:08:25.272014  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (2.050576ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58066]
I0919 12:08:25.371131  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.626496ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58066]
I0919 12:08:25.471249  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.812324ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58066]
I0919 12:08:25.584811  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (15.320792ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58066]
I0919 12:08:25.671183  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.71929ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58066]
I0919 12:08:25.771916  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.578299ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58066]
I0919 12:08:25.871317  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.599642ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58066]
I0919 12:08:25.871829  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:25.871856  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:25.871961  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:25.875913  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:25.876204  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:25.876659  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:25.971005  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.537342ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58066]
I0919 12:08:26.071672  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (2.148533ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58066]
I0919 12:08:26.171507  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (2.10843ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58066]
I0919 12:08:26.271172  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.644682ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58066]
I0919 12:08:26.371166  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.66967ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58066]
I0919 12:08:26.471327  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.81825ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58066]
I0919 12:08:26.571153  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.750665ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58066]
I0919 12:08:26.671357  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.856129ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58066]
I0919 12:08:26.771218  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.734007ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58066]
I0919 12:08:26.871062  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.50859ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58066]
I0919 12:08:26.871954  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:26.871980  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:26.872107  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:26.876102  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:26.876339  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:26.876829  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:26.971076  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.487041ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58066]
I0919 12:08:27.070801  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.49188ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58066]
I0919 12:08:27.171213  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.677821ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58066]
I0919 12:08:27.270979  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.49918ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58066]
I0919 12:08:27.371295  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.821907ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58066]
I0919 12:08:27.471126  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.711449ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58066]
I0919 12:08:27.575166  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (4.665481ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58066]
I0919 12:08:27.670839  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.452542ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58066]
I0919 12:08:27.770993  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.592088ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58066]
I0919 12:08:27.871146  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.655183ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58066]
I0919 12:08:27.872135  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:27.872136  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:27.872331  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:27.876265  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:27.876482  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:27.876990  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:27.970972  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.569117ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58066]
I0919 12:08:28.072213  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (2.303642ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58066]
I0919 12:08:28.171145  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.642318ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58066]
I0919 12:08:28.271360  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.949395ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58066]
I0919 12:08:28.371185  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.697211ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58066]
I0919 12:08:28.471071  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.65891ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58066]
I0919 12:08:28.571123  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.653759ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58066]
I0919 12:08:28.672234  108421 httplog.go:90] GET /api/v1/namespaces/default: (1.634094ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:28.673870  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (4.465117ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58066]
I0919 12:08:28.674259  108421 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.656111ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:28.675951  108421 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.168116ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:28.771752  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (2.087053ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:28.870938  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.443356ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:28.872271  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:28.872286  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:28.872499  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:28.876445  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:28.876628  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:28.877174  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:08:28.972779  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (3.268304ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:28.975662  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (2.047481ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:28.982388  108421 httplog.go:90] DELETE /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (6.140184ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:28.985026  108421 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure9af0cc2b-9204-4025-b614-76258565db08/pods/pidpressure-fake-name: (1.105981ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:28.985545  108421 httplog.go:90] GET /api/v1/pods?allowWatchBookmarks=true&fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&resourceVersion=30267&timeoutSeconds=472&watch=true: (30.224661652s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57538]
I0919 12:08:28.985764  108421 shared_informer.go:223] stop requested
E0919 12:08:28.985779  108421 shared_informer.go:200] unable to sync caches for taint
I0919 12:08:28.985791  108421 node_lifecycle_controller.go:492] Shutting down node controller
E0919 12:08:28.986033  108421 scheduling_queue.go:833] Error while retrieving next pod from scheduling queue: scheduling queue is closed
I0919 12:08:28.986271  108421 httplog.go:90] GET /apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=30272&timeout=7m14s&timeoutSeconds=434&watch=true: (30.121714133s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57606]
I0919 12:08:28.986408  108421 httplog.go:90] GET /apis/storage.k8s.io/v1beta1/csinodes?allowWatchBookmarks=true&resourceVersion=30272&timeout=6m4s&timeoutSeconds=364&watch=true: (30.122211787s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57610]
I0919 12:08:28.986495  108421 httplog.go:90] GET /api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=30267&timeout=8m32s&timeoutSeconds=512&watch=true: (30.120803619s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57602]
I0919 12:08:28.986596  108421 httplog.go:90] GET /apis/apps/v1/replicasets?allowWatchBookmarks=true&resourceVersion=30273&timeout=7m20s&timeoutSeconds=440&watch=true: (30.121688071s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0919 12:08:28.986648  108421 httplog.go:90] GET /apis/apps/v1/statefulsets?allowWatchBookmarks=true&resourceVersion=30272&timeout=6m34s&timeoutSeconds=394&watch=true: (30.120525047s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57604]
I0919 12:08:28.986737  108421 httplog.go:90] GET /api/v1/replicationcontrollers?allowWatchBookmarks=true&resourceVersion=30268&timeout=5m37s&timeoutSeconds=337&watch=true: (30.120413746s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57600]
I0919 12:08:28.986765  108421 httplog.go:90] GET /api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=30267&timeout=8m7s&timeoutSeconds=487&watch=true: (30.120249072s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57596]
I0919 12:08:28.986840  108421 httplog.go:90] GET /api/v1/services?allowWatchBookmarks=true&resourceVersion=30561&timeout=8m9s&timeoutSeconds=489&watch=true: (30.120124575s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57598]
I0919 12:08:28.986881  108421 httplog.go:90] GET /apis/policy/v1beta1/poddisruptionbudgets?allowWatchBookmarks=true&resourceVersion=30271&timeout=5m18s&timeoutSeconds=318&watch=true: (30.118926705s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0919 12:08:28.986965  108421 httplog.go:90] GET /api/v1/nodes?allowWatchBookmarks=true&resourceVersion=30267&timeout=7m50s&timeoutSeconds=470&watch=true: (30.122189262s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57590]
I0919 12:08:28.994389  108421 httplog.go:90] DELETE /api/v1/nodes: (8.84564ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:28.994751  108421 controller.go:182] Shutting down kubernetes service endpoint reconciler
I0919 12:08:28.996640  108421 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.453121ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0919 12:08:29.008526  108421 httplog.go:90] PUT /api/v1/namespaces/default/endpoints/kubernetes: (11.398154ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
--- FAIL: TestNodePIDPressure (33.88s)
    predicates_test.go:924: Test Failed: error, timed out waiting for the condition, while waiting for scheduled

				from junit_d965d8661547eb73cabe6d94d5550ec333e4c0fa_20190919-120028.xml



k8s.io/kubernetes/test/integration/scheduler TestSchedulerCreationFromConfigMap 4.08s

go test -v k8s.io/kubernetes/test/integration/scheduler -run TestSchedulerCreationFromConfigMap$
=== RUN   TestSchedulerCreationFromConfigMap
W0919 12:10:09.131955  108421 services.go:35] No CIDR for service cluster IPs specified. Default value which was 10.0.0.0/24 is deprecated and will be removed in future releases. Please specify it using --service-cluster-ip-range on kube-apiserver.
I0919 12:10:09.132085  108421 services.go:47] Setting service IP to "10.0.0.1" (read-write).
I0919 12:10:09.132173  108421 master.go:303] Node port range unspecified. Defaulting to 30000-32767.
I0919 12:10:09.132243  108421 master.go:259] Using reconciler: 
I0919 12:10:09.133941  108421 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.134295  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:10:09.134522  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:10:09.135373  108421 store.go:1342] Monitoring podtemplates count at <storage-prefix>//podtemplates
I0919 12:10:09.135434  108421 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.135487  108421 reflector.go:153] Listing and watching *core.PodTemplate from storage/cacher.go:/podtemplates
I0919 12:10:09.135778  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:10:09.135811  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:10:09.136292  108421 watch_cache.go:405] Replace watchCache (rev: 48375) 
I0919 12:10:09.136589  108421 store.go:1342] Monitoring events count at <storage-prefix>//events
I0919 12:10:09.136632  108421 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.136666  108421 reflector.go:153] Listing and watching *core.Event from storage/cacher.go:/events
I0919 12:10:09.136801  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:10:09.136815  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:10:09.137373  108421 store.go:1342] Monitoring limitranges count at <storage-prefix>//limitranges
I0919 12:10:09.137405  108421 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.137459  108421 reflector.go:153] Listing and watching *core.LimitRange from storage/cacher.go:/limitranges
I0919 12:10:09.137625  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:10:09.137645  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:10:09.138663  108421 store.go:1342] Monitoring resourcequotas count at <storage-prefix>//resourcequotas
I0919 12:10:09.138802  108421 watch_cache.go:405] Replace watchCache (rev: 48375) 
I0919 12:10:09.138807  108421 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.138850  108421 watch_cache.go:405] Replace watchCache (rev: 48375) 
I0919 12:10:09.138991  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:10:09.139010  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:10:09.139084  108421 reflector.go:153] Listing and watching *core.ResourceQuota from storage/cacher.go:/resourcequotas
I0919 12:10:09.140079  108421 watch_cache.go:405] Replace watchCache (rev: 48375) 
I0919 12:10:09.140268  108421 store.go:1342] Monitoring secrets count at <storage-prefix>//secrets
I0919 12:10:09.140438  108421 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.140615  108421 reflector.go:153] Listing and watching *core.Secret from storage/cacher.go:/secrets
I0919 12:10:09.140623  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:10:09.140733  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:10:09.141693  108421 store.go:1342] Monitoring persistentvolumes count at <storage-prefix>//persistentvolumes
I0919 12:10:09.141748  108421 watch_cache.go:405] Replace watchCache (rev: 48375) 
I0919 12:10:09.141836  108421 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.142006  108421 reflector.go:153] Listing and watching *core.PersistentVolume from storage/cacher.go:/persistentvolumes
I0919 12:10:09.142013  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:10:09.142033  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:10:09.142999  108421 watch_cache.go:405] Replace watchCache (rev: 48375) 
I0919 12:10:09.143051  108421 store.go:1342] Monitoring persistentvolumeclaims count at <storage-prefix>//persistentvolumeclaims
I0919 12:10:09.143092  108421 reflector.go:153] Listing and watching *core.PersistentVolumeClaim from storage/cacher.go:/persistentvolumeclaims
I0919 12:10:09.143219  108421 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.143432  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:10:09.143462  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:10:09.143893  108421 watch_cache.go:405] Replace watchCache (rev: 48375) 
I0919 12:10:09.144000  108421 store.go:1342] Monitoring configmaps count at <storage-prefix>//configmaps
I0919 12:10:09.144154  108421 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.144293  108421 reflector.go:153] Listing and watching *core.ConfigMap from storage/cacher.go:/configmaps
I0919 12:10:09.144399  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:10:09.144462  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:10:09.144977  108421 watch_cache.go:405] Replace watchCache (rev: 48375) 
I0919 12:10:09.145085  108421 store.go:1342] Monitoring namespaces count at <storage-prefix>//namespaces
I0919 12:10:09.145226  108421 reflector.go:153] Listing and watching *core.Namespace from storage/cacher.go:/namespaces
I0919 12:10:09.145502  108421 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.145701  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:10:09.145723  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:10:09.147242  108421 watch_cache.go:405] Replace watchCache (rev: 48375) 
I0919 12:10:09.148003  108421 store.go:1342] Monitoring endpoints count at <storage-prefix>//services/endpoints
I0919 12:10:09.148240  108421 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.148312  108421 reflector.go:153] Listing and watching *core.Endpoints from storage/cacher.go:/services/endpoints
I0919 12:10:09.148496  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:10:09.148519  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:10:09.149212  108421 watch_cache.go:405] Replace watchCache (rev: 48375) 
I0919 12:10:09.149652  108421 store.go:1342] Monitoring nodes count at <storage-prefix>//minions
I0919 12:10:09.149751  108421 reflector.go:153] Listing and watching *core.Node from storage/cacher.go:/minions
I0919 12:10:09.149834  108421 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.150096  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:10:09.150128  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:10:09.150611  108421 watch_cache.go:405] Replace watchCache (rev: 48375) 
I0919 12:10:09.150884  108421 store.go:1342] Monitoring pods count at <storage-prefix>//pods
I0919 12:10:09.150963  108421 reflector.go:153] Listing and watching *core.Pod from storage/cacher.go:/pods
I0919 12:10:09.151017  108421 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.151210  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:10:09.151235  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:10:09.151849  108421 watch_cache.go:405] Replace watchCache (rev: 48375) 
I0919 12:10:09.152577  108421 store.go:1342] Monitoring serviceaccounts count at <storage-prefix>//serviceaccounts
I0919 12:10:09.152629  108421 reflector.go:153] Listing and watching *core.ServiceAccount from storage/cacher.go:/serviceaccounts
I0919 12:10:09.153485  108421 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.153661  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:10:09.153678  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:10:09.155301  108421 store.go:1342] Monitoring services count at <storage-prefix>//services/specs
I0919 12:10:09.155339  108421 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.155509  108421 reflector.go:153] Listing and watching *core.Service from storage/cacher.go:/services/specs
I0919 12:10:09.155598  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:10:09.155620  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:10:09.156212  108421 watch_cache.go:405] Replace watchCache (rev: 48375) 
I0919 12:10:09.156347  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:10:09.156375  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:10:09.157766  108421 watch_cache.go:405] Replace watchCache (rev: 48375) 
I0919 12:10:09.157887  108421 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.158052  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:10:09.158078  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:10:09.158989  108421 store.go:1342] Monitoring replicationcontrollers count at <storage-prefix>//controllers
I0919 12:10:09.159019  108421 rest.go:115] the default service ipfamily for this cluster is: IPv4
I0919 12:10:09.159337  108421 storage_factory.go:285] storing bindings in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.159550  108421 storage_factory.go:285] storing componentstatuses in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.159621  108421 reflector.go:153] Listing and watching *core.ReplicationController from storage/cacher.go:/controllers
I0919 12:10:09.160672  108421 watch_cache.go:405] Replace watchCache (rev: 48375) 
I0919 12:10:09.161106  108421 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.161994  108421 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.162500  108421 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.163204  108421 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.163763  108421 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.164003  108421 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.164331  108421 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.164829  108421 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.165410  108421 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.165740  108421 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.166678  108421 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.167082  108421 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.167805  108421 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.168195  108421 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.169244  108421 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.169607  108421 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.169882  108421 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.170166  108421 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.170559  108421 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.170932  108421 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.171437  108421 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.172455  108421 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.173004  108421 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.174200  108421 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.175181  108421 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.175917  108421 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.176378  108421 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.177506  108421 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.178118  108421 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.178996  108421 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.179866  108421 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.181101  108421 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.182215  108421 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.182664  108421 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.182935  108421 master.go:450] Skipping disabled API group "auditregistration.k8s.io".
I0919 12:10:09.183044  108421 master.go:461] Enabling API group "authentication.k8s.io".
I0919 12:10:09.183198  108421 master.go:461] Enabling API group "authorization.k8s.io".
I0919 12:10:09.183590  108421 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.184108  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:10:09.184251  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:10:09.185461  108421 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0919 12:10:09.185538  108421 reflector.go:153] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0919 12:10:09.185635  108421 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.185796  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:10:09.185818  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:10:09.188969  108421 watch_cache.go:405] Replace watchCache (rev: 48375) 
I0919 12:10:09.189347  108421 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0919 12:10:09.189529  108421 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.189710  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:10:09.189744  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:10:09.189743  108421 reflector.go:153] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0919 12:10:09.190584  108421 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0919 12:10:09.190612  108421 master.go:461] Enabling API group "autoscaling".
I0919 12:10:09.190785  108421 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.190937  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:10:09.190956  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:10:09.191123  108421 watch_cache.go:405] Replace watchCache (rev: 48375) 
I0919 12:10:09.191474  108421 reflector.go:153] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0919 12:10:09.193531  108421 store.go:1342] Monitoring jobs.batch count at <storage-prefix>//jobs
I0919 12:10:09.193584  108421 reflector.go:153] Listing and watching *batch.Job from storage/cacher.go:/jobs
I0919 12:10:09.193781  108421 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.193906  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:10:09.193922  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:10:09.194186  108421 watch_cache.go:405] Replace watchCache (rev: 48375) 
I0919 12:10:09.194456  108421 watch_cache.go:405] Replace watchCache (rev: 48375) 
I0919 12:10:09.195354  108421 store.go:1342] Monitoring cronjobs.batch count at <storage-prefix>//cronjobs
I0919 12:10:09.195481  108421 master.go:461] Enabling API group "batch".
I0919 12:10:09.195778  108421 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.195531  108421 reflector.go:153] Listing and watching *batch.CronJob from storage/cacher.go:/cronjobs
I0919 12:10:09.196016  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:10:09.196035  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:10:09.198204  108421 store.go:1342] Monitoring certificatesigningrequests.certificates.k8s.io count at <storage-prefix>//certificatesigningrequests
I0919 12:10:09.198230  108421 master.go:461] Enabling API group "certificates.k8s.io".
I0919 12:10:09.198258  108421 reflector.go:153] Listing and watching *certificates.CertificateSigningRequest from storage/cacher.go:/certificatesigningrequests
I0919 12:10:09.198392  108421 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.198529  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:10:09.198548  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:10:09.200395  108421 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0919 12:10:09.200646  108421 reflector.go:153] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0919 12:10:09.200741  108421 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.200890  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:10:09.200910  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:10:09.201056  108421 watch_cache.go:405] Replace watchCache (rev: 48375) 
I0919 12:10:09.201580  108421 watch_cache.go:405] Replace watchCache (rev: 48375) 
I0919 12:10:09.202665  108421 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0919 12:10:09.202687  108421 master.go:461] Enabling API group "coordination.k8s.io".
I0919 12:10:09.202703  108421 master.go:450] Skipping disabled API group "discovery.k8s.io".
I0919 12:10:09.202707  108421 watch_cache.go:405] Replace watchCache (rev: 48375) 
I0919 12:10:09.202757  108421 reflector.go:153] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0919 12:10:09.202853  108421 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.202994  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:10:09.203012  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:10:09.203721  108421 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0919 12:10:09.203752  108421 master.go:461] Enabling API group "extensions".
I0919 12:10:09.203897  108421 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.204026  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:10:09.204028  108421 reflector.go:153] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0919 12:10:09.204046  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:10:09.204094  108421 watch_cache.go:405] Replace watchCache (rev: 48375) 
I0919 12:10:09.205167  108421 watch_cache.go:405] Replace watchCache (rev: 48375) 
I0919 12:10:09.205299  108421 store.go:1342] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I0919 12:10:09.205382  108421 reflector.go:153] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I0919 12:10:09.205519  108421 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.205651  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:10:09.205675  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:10:09.206016  108421 watch_cache.go:405] Replace watchCache (rev: 48375) 
I0919 12:10:09.208025  108421 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0919 12:10:09.208069  108421 master.go:461] Enabling API group "networking.k8s.io".
I0919 12:10:09.208103  108421 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.208263  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:10:09.208296  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:10:09.208345  108421 reflector.go:153] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0919 12:10:09.209089  108421 store.go:1342] Monitoring runtimeclasses.node.k8s.io count at <storage-prefix>//runtimeclasses
I0919 12:10:09.209110  108421 master.go:461] Enabling API group "node.k8s.io".
I0919 12:10:09.209242  108421 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.209334  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:10:09.209349  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:10:09.209474  108421 reflector.go:153] Listing and watching *node.RuntimeClass from storage/cacher.go:/runtimeclasses
I0919 12:10:09.210829  108421 store.go:1342] Monitoring poddisruptionbudgets.policy count at <storage-prefix>//poddisruptionbudgets
I0919 12:10:09.211184  108421 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.210866  108421 watch_cache.go:405] Replace watchCache (rev: 48375) 
I0919 12:10:09.210905  108421 reflector.go:153] Listing and watching *policy.PodDisruptionBudget from storage/cacher.go:/poddisruptionbudgets
I0919 12:10:09.211391  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:10:09.211627  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:10:09.212803  108421 watch_cache.go:405] Replace watchCache (rev: 48375) 
I0919 12:10:09.212972  108421 watch_cache.go:405] Replace watchCache (rev: 48375) 
I0919 12:10:09.213163  108421 store.go:1342] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicy
I0919 12:10:09.213227  108421 master.go:461] Enabling API group "policy".
I0919 12:10:09.213268  108421 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.213449  108421 reflector.go:153] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicy
I0919 12:10:09.213552  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:10:09.213576  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:10:09.215406  108421 watch_cache.go:405] Replace watchCache (rev: 48375) 
I0919 12:10:09.215652  108421 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0919 12:10:09.215767  108421 reflector.go:153] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0919 12:10:09.217019  108421 watch_cache.go:405] Replace watchCache (rev: 48375) 
I0919 12:10:09.217716  108421 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.217969  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:10:09.218064  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:10:09.218893  108421 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0919 12:10:09.218946  108421 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.219102  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:10:09.219124  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:10:09.219209  108421 reflector.go:153] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0919 12:10:09.220499  108421 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0919 12:10:09.220570  108421 watch_cache.go:405] Replace watchCache (rev: 48375) 
I0919 12:10:09.220677  108421 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.220780  108421 reflector.go:153] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0919 12:10:09.220821  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:10:09.220839  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:10:09.221398  108421 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0919 12:10:09.221486  108421 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.221499  108421 reflector.go:153] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0919 12:10:09.221585  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:10:09.221600  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:10:09.222372  108421 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0919 12:10:09.222466  108421 reflector.go:153] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0919 12:10:09.222582  108421 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.222684  108421 watch_cache.go:405] Replace watchCache (rev: 48375) 
I0919 12:10:09.222694  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:10:09.222715  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:10:09.223185  108421 watch_cache.go:405] Replace watchCache (rev: 48375) 
I0919 12:10:09.223647  108421 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0919 12:10:09.223689  108421 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.223801  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:10:09.223816  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:10:09.223862  108421 reflector.go:153] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0919 12:10:09.225141  108421 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0919 12:10:09.225326  108421 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.225383  108421 reflector.go:153] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0919 12:10:09.225473  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:10:09.225490  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:10:09.226608  108421 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0919 12:10:09.226638  108421 master.go:461] Enabling API group "rbac.authorization.k8s.io".
I0919 12:10:09.226688  108421 watch_cache.go:405] Replace watchCache (rev: 48375) 
I0919 12:10:09.226902  108421 reflector.go:153] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0919 12:10:09.228242  108421 watch_cache.go:405] Replace watchCache (rev: 48375) 
I0919 12:10:09.228722  108421 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.228853  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:10:09.228877  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:10:09.229869  108421 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0919 12:10:09.230038  108421 watch_cache.go:405] Replace watchCache (rev: 48375) 
I0919 12:10:09.230050  108421 reflector.go:153] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0919 12:10:09.230039  108421 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.230223  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:10:09.230243  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:10:09.231346  108421 watch_cache.go:405] Replace watchCache (rev: 48375) 
I0919 12:10:09.231843  108421 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0919 12:10:09.231866  108421 master.go:461] Enabling API group "scheduling.k8s.io".
I0919 12:10:09.231967  108421 master.go:450] Skipping disabled API group "settings.k8s.io".
I0919 12:10:09.232097  108421 reflector.go:153] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0919 12:10:09.232151  108421 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.232307  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:10:09.232329  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:10:09.232439  108421 watch_cache.go:405] Replace watchCache (rev: 48375) 
I0919 12:10:09.232843  108421 watch_cache.go:405] Replace watchCache (rev: 48375) 
I0919 12:10:09.233886  108421 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0919 12:10:09.233949  108421 reflector.go:153] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0919 12:10:09.234230  108421 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.234358  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:10:09.234487  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:10:09.234932  108421 watch_cache.go:405] Replace watchCache (rev: 48375) 
I0919 12:10:09.235655  108421 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0919 12:10:09.235683  108421 reflector.go:153] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0919 12:10:09.235696  108421 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.235855  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:10:09.235880  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:10:09.236820  108421 watch_cache.go:405] Replace watchCache (rev: 48375) 
I0919 12:10:09.237353  108421 store.go:1342] Monitoring csinodes.storage.k8s.io count at <storage-prefix>//csinodes
I0919 12:10:09.237398  108421 reflector.go:153] Listing and watching *storage.CSINode from storage/cacher.go:/csinodes
I0919 12:10:09.237393  108421 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.237588  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:10:09.237608  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:10:09.238382  108421 watch_cache.go:405] Replace watchCache (rev: 48375) 
I0919 12:10:09.238706  108421 store.go:1342] Monitoring csidrivers.storage.k8s.io count at <storage-prefix>//csidrivers
I0919 12:10:09.238748  108421 reflector.go:153] Listing and watching *storage.CSIDriver from storage/cacher.go:/csidrivers
I0919 12:10:09.238859  108421 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.238993  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:10:09.239012  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:10:09.240802  108421 watch_cache.go:405] Replace watchCache (rev: 48375) 
I0919 12:10:09.240835  108421 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0919 12:10:09.240912  108421 reflector.go:153] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0919 12:10:09.240987  108421 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.241126  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:10:09.241147  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:10:09.241601  108421 watch_cache.go:405] Replace watchCache (rev: 48375) 
I0919 12:10:09.241959  108421 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0919 12:10:09.241978  108421 master.go:461] Enabling API group "storage.k8s.io".
I0919 12:10:09.242090  108421 reflector.go:153] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0919 12:10:09.242128  108421 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.242270  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:10:09.242287  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:10:09.242789  108421 watch_cache.go:405] Replace watchCache (rev: 48375) 
I0919 12:10:09.243693  108421 store.go:1342] Monitoring deployments.apps count at <storage-prefix>//deployments
I0919 12:10:09.243818  108421 reflector.go:153] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0919 12:10:09.244127  108421 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.244391  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:10:09.244568  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:10:09.244712  108421 watch_cache.go:405] Replace watchCache (rev: 48375) 
I0919 12:10:09.245754  108421 store.go:1342] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0919 12:10:09.245916  108421 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.246068  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:10:09.246099  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:10:09.246121  108421 reflector.go:153] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0919 12:10:09.247228  108421 store.go:1342] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0919 12:10:09.247372  108421 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.247569  108421 reflector.go:153] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0919 12:10:09.247864  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:10:09.247886  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:10:09.247936  108421 watch_cache.go:405] Replace watchCache (rev: 48375) 
I0919 12:10:09.248254  108421 watch_cache.go:405] Replace watchCache (rev: 48375) 
I0919 12:10:09.248675  108421 store.go:1342] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0919 12:10:09.248738  108421 reflector.go:153] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0919 12:10:09.248823  108421 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.248961  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:10:09.248980  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:10:09.249956  108421 watch_cache.go:405] Replace watchCache (rev: 48375) 
I0919 12:10:09.250021  108421 store.go:1342] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0919 12:10:09.250039  108421 master.go:461] Enabling API group "apps".
I0919 12:10:09.250069  108421 reflector.go:153] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0919 12:10:09.250074  108421 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.250197  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:10:09.250214  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:10:09.250929  108421 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0919 12:10:09.250968  108421 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.251027  108421 reflector.go:153] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0919 12:10:09.251103  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:10:09.251120  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:10:09.252360  108421 watch_cache.go:405] Replace watchCache (rev: 48375) 
I0919 12:10:09.253057  108421 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0919 12:10:09.253108  108421 reflector.go:153] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0919 12:10:09.253235  108421 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.253377  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:10:09.253390  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:10:09.254304  108421 watch_cache.go:405] Replace watchCache (rev: 48375) 
I0919 12:10:09.254334  108421 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0919 12:10:09.254368  108421 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.254498  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:10:09.254515  108421 watch_cache.go:405] Replace watchCache (rev: 48375) 
I0919 12:10:09.254544  108421 reflector.go:153] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0919 12:10:09.254518  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:10:09.255859  108421 watch_cache.go:405] Replace watchCache (rev: 48375) 
I0919 12:10:09.256059  108421 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0919 12:10:09.256079  108421 master.go:461] Enabling API group "admissionregistration.k8s.io".
I0919 12:10:09.256098  108421 reflector.go:153] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0919 12:10:09.256127  108421 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.256671  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:10:09.256700  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:10:09.257028  108421 watch_cache.go:405] Replace watchCache (rev: 48375) 
I0919 12:10:09.258056  108421 store.go:1342] Monitoring events count at <storage-prefix>//events
I0919 12:10:09.258082  108421 master.go:461] Enabling API group "events.k8s.io".
I0919 12:10:09.258117  108421 reflector.go:153] Listing and watching *core.Event from storage/cacher.go:/events
I0919 12:10:09.258377  108421 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.258666  108421 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.258861  108421 watch_cache.go:405] Replace watchCache (rev: 48375) 
I0919 12:10:09.259093  108421 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.259298  108421 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.259488  108421 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.259742  108421 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.260119  108421 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.260273  108421 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.260543  108421 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.260671  108421 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.261748  108421 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.262198  108421 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.263165  108421 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.263620  108421 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.264429  108421 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.264650  108421 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.265348  108421 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.265575  108421 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.266227  108421 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.266472  108421 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0919 12:10:09.266555  108421 genericapiserver.go:404] Skipping API batch/v2alpha1 because it has no resources.
I0919 12:10:09.267297  108421 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.267403  108421 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.267615  108421 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.268596  108421 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.269313  108421 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.269929  108421 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.270132  108421 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.271041  108421 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.271845  108421 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.272092  108421 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.272885  108421 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0919 12:10:09.272961  108421 genericapiserver.go:404] Skipping API node.k8s.io/v1alpha1 because it has no resources.
I0919 12:10:09.273824  108421 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.274093  108421 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.274684  108421 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.275401  108421 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.276088  108421 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.276656  108421 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.277521  108421 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.278090  108421 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.278716  108421 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.279444  108421 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.280193  108421 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0919 12:10:09.280270  108421 genericapiserver.go:404] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
I0919 12:10:09.280932  108421 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.283075  108421 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0919 12:10:09.283202  108421 genericapiserver.go:404] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
I0919 12:10:09.283894  108421 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.284967  108421 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.285506  108421 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.286211  108421 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.286771  108421 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.287403  108421 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.288008  108421 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0919 12:10:09.288088  108421 genericapiserver.go:404] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
I0919 12:10:09.289191  108421 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.290158  108421 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.290498  108421 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.291319  108421 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.291651  108421 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.291894  108421 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.292732  108421 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.293074  108421 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.293342  108421 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.294307  108421 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.294706  108421 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.295052  108421 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0919 12:10:09.295122  108421 genericapiserver.go:404] Skipping API apps/v1beta2 because it has no resources.
W0919 12:10:09.295130  108421 genericapiserver.go:404] Skipping API apps/v1beta1 because it has no resources.
I0919 12:10:09.295910  108421 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.296566  108421 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.297261  108421 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.298080  108421 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.299020  108421 storage_factory.go:285] storing events.events.k8s.io in events.k8s.io/v1beta1, reading as events.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"20d11478-2743-45ef-ac7a-cf53d7c1fb25", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:10:09.306460  108421 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 12:10:09.306599  108421 healthz.go:177] healthz check poststarthook/bootstrap-controller failed: not finished
I0919 12:10:09.306635  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:10:09.306676  108421 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 12:10:09.306714  108421 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 12:10:09.306796  108421 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[-]poststarthook/bootstrap-controller failed: reason withheld
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 12:10:09.306991  108421 httplog.go:90] GET /healthz: (674.663µs) 0 [Go-http-client/1.1 127.0.0.1:50362]
I0919 12:10:09.308796  108421 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (2.432603ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50364]
I0919 12:10:09.311516  108421 httplog.go:90] GET /api/v1/services: (1.222605ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50364]
I0919 12:10:09.315882  108421 httplog.go:90] GET /api/v1/services: (918.994µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50364]
I0919 12:10:09.318193  108421 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 12:10:09.318346  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:10:09.318434  108421 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 12:10:09.318501  108421 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 12:10:09.318596  108421 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 12:10:09.318785  108421 httplog.go:90] GET /healthz: (659.222µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50362]
I0919 12:10:09.319700  108421 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.02739ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50364]
I0919 12:10:09.320383  108421 httplog.go:90] GET /api/v1/services: (1.023815ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50366]
I0919 12:10:09.321836  108421 httplog.go:90] POST /api/v1/namespaces: (1.641104ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50364]
I0919 12:10:09.322481  108421 httplog.go:90] GET /api/v1/services: (3.0893ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50362]
I0919 12:10:09.323485  108421 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.128218ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50364]
I0919 12:10:09.326454  108421 httplog.go:90] POST /api/v1/namespaces: (2.429034ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50362]
I0919 12:10:09.327804  108421 httplog.go:90] GET /api/v1/namespaces/kube-node-lease: (948.241µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50362]
I0919 12:10:09.329669  108421 httplog.go:90] POST /api/v1/namespaces: (1.499188ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50362]
I0919 12:10:09.408879  108421 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 12:10:09.408912  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:10:09.408932  108421 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 12:10:09.408938  108421 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 12:10:09.408947  108421 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 12:10:09.408984  108421 httplog.go:90] GET /healthz: (247.7µs) 0 [Go-http-client/1.1 127.0.0.1:50362]
I0919 12:10:09.419929  108421 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 12:10:09.419967  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:10:09.419977  108421 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 12:10:09.419987  108421 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 12:10:09.419995  108421 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 12:10:09.420031  108421 httplog.go:90] GET /healthz: (278.276µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50362]
I0919 12:10:09.508784  108421 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 12:10:09.508820  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:10:09.508831  108421 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 12:10:09.508839  108421 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 12:10:09.508847  108421 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 12:10:09.508883  108421 httplog.go:90] GET /healthz: (236.541µs) 0 [Go-http-client/1.1 127.0.0.1:50362]
I0919 12:10:09.519862  108421 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 12:10:09.519902  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:10:09.519912  108421 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 12:10:09.519918  108421 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 12:10:09.519923  108421 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 12:10:09.519958  108421 httplog.go:90] GET /healthz: (213.789µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50362]
I0919 12:10:09.608800  108421 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 12:10:09.608844  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:10:09.608857  108421 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 12:10:09.608866  108421 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 12:10:09.608875  108421 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 12:10:09.608911  108421 httplog.go:90] GET /healthz: (278.312µs) 0 [Go-http-client/1.1 127.0.0.1:50362]
I0919 12:10:09.620044  108421 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 12:10:09.620078  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:10:09.620087  108421 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 12:10:09.620094  108421 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 12:10:09.620100  108421 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 12:10:09.620145  108421 httplog.go:90] GET /healthz: (244.174µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50362]
I0919 12:10:09.708882  108421 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 12:10:09.708927  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:10:09.708939  108421 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 12:10:09.708949  108421 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 12:10:09.708960  108421 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 12:10:09.709003  108421 httplog.go:90] GET /healthz: (281.757µs) 0 [Go-http-client/1.1 127.0.0.1:50362]
I0919 12:10:09.719972  108421 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 12:10:09.720011  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:10:09.720024  108421 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 12:10:09.720033  108421 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 12:10:09.720046  108421 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 12:10:09.720077  108421 httplog.go:90] GET /healthz: (263.773µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50362]
I0919 12:10:09.808819  108421 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 12:10:09.808856  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:10:09.808869  108421 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 12:10:09.808879  108421 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 12:10:09.808888  108421 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 12:10:09.808920  108421 httplog.go:90] GET /healthz: (266.135µs) 0 [Go-http-client/1.1 127.0.0.1:50362]
I0919 12:10:09.819884  108421 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 12:10:09.819921  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:10:09.819932  108421 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 12:10:09.819941  108421 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 12:10:09.819948  108421 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 12:10:09.819979  108421 httplog.go:90] GET /healthz: (235.342µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50362]
I0919 12:10:09.908821  108421 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 12:10:09.908860  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:10:09.908869  108421 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 12:10:09.908875  108421 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 12:10:09.908883  108421 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 12:10:09.908907  108421 httplog.go:90] GET /healthz: (232.55µs) 0 [Go-http-client/1.1 127.0.0.1:50362]
I0919 12:10:09.919927  108421 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 12:10:09.919963  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:10:09.919971  108421 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 12:10:09.919978  108421 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 12:10:09.919983  108421 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 12:10:09.920005  108421 httplog.go:90] GET /healthz: (213.634µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50362]
I0919 12:10:10.012445  108421 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 12:10:10.012483  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:10:10.012495  108421 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 12:10:10.012504  108421 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 12:10:10.012512  108421 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 12:10:10.012549  108421 httplog.go:90] GET /healthz: (280.227µs) 0 [Go-http-client/1.1 127.0.0.1:50362]
I0919 12:10:10.020514  108421 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 12:10:10.020566  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:10:10.020579  108421 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 12:10:10.020590  108421 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 12:10:10.020598  108421 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 12:10:10.020654  108421 httplog.go:90] GET /healthz: (330.48µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50362]
I0919 12:10:10.108805  108421 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 12:10:10.108845  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:10:10.108856  108421 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 12:10:10.108865  108421 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 12:10:10.108872  108421 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 12:10:10.108928  108421 httplog.go:90] GET /healthz: (277.078µs) 0 [Go-http-client/1.1 127.0.0.1:50362]
I0919 12:10:10.119972  108421 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 12:10:10.120014  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:10:10.120027  108421 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 12:10:10.120037  108421 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 12:10:10.120046  108421 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 12:10:10.120085  108421 httplog.go:90] GET /healthz: (256.293µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50362]
I0919 12:10:10.131684  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:10:10.131781  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:10:10.209824  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:10:10.209854  108421 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 12:10:10.209861  108421 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 12:10:10.209867  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 12:10:10.209966  108421 httplog.go:90] GET /healthz: (1.307523ms) 0 [Go-http-client/1.1 127.0.0.1:50362]
I0919 12:10:10.221030  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:10:10.221068  108421 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 12:10:10.221076  108421 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 12:10:10.221083  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 12:10:10.221129  108421 httplog.go:90] GET /healthz: (1.274537ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50362]
I0919 12:10:10.304438  108421 httplog.go:90] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-node-critical: (1.129634ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50362]
I0919 12:10:10.304717  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.876833ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50366]
I0919 12:10:10.305717  108421 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.505764ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50572]
I0919 12:10:10.306926  108421 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: (789.873µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50572]
I0919 12:10:10.307011  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.652877ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50362]
I0919 12:10:10.308317  108421 httplog.go:90] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (2.958573ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50366]
I0919 12:10:10.308536  108421 storage_scheduling.go:139] created PriorityClass system-node-critical with value 2000001000
I0919 12:10:10.310525  108421 httplog.go:90] POST /api/v1/namespaces/kube-system/configmaps: (2.970872ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50572]
I0919 12:10:10.311098  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:10:10.311121  108421 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 12:10:10.311121  108421 httplog.go:90] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-cluster-critical: (2.098013ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50576]
I0919 12:10:10.311135  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/ca-registration ok
healthz check failed
I0919 12:10:10.311176  108421 httplog.go:90] GET /healthz: (2.314514ms) 0 [Go-http-client/1.1 127.0.0.1:50366]
I0919 12:10:10.312817  108421 httplog.go:90] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (1.31776ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50572]
I0919 12:10:10.312982  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (5.649049ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50362]
I0919 12:10:10.313009  108421 storage_scheduling.go:139] created PriorityClass system-cluster-critical with value 2000000000
I0919 12:10:10.313025  108421 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
I0919 12:10:10.314333  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (971.227µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50572]
I0919 12:10:10.315460  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (699.667µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50572]
I0919 12:10:10.316568  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (773.821µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50572]
I0919 12:10:10.317626  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (709.743µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50572]
I0919 12:10:10.318688  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (746.866µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50572]
I0919 12:10:10.319788  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (777.583µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50572]
I0919 12:10:10.321207  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:10:10.321237  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 12:10:10.321268  108421 httplog.go:90] GET /healthz: (1.594121ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50574]
I0919 12:10:10.321349  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-admin: (1.159195ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50572]
I0919 12:10:10.323074  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.209446ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50572]
I0919 12:10:10.323396  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/cluster-admin
I0919 12:10:10.324412  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:discovery: (778.877µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50572]
I0919 12:10:10.326120  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.332362ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50572]
I0919 12:10:10.326472  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:discovery
I0919 12:10:10.327560  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:basic-user: (883.646µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50572]
I0919 12:10:10.329498  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.499839ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50572]
I0919 12:10:10.329766  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:basic-user
I0919 12:10:10.330899  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:public-info-viewer: (841.816µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50572]
I0919 12:10:10.332630  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.358341ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50572]
I0919 12:10:10.332881  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:public-info-viewer
I0919 12:10:10.333972  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (851.986µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50572]
I0919 12:10:10.335589  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.253908ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50572]
I0919 12:10:10.335793  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/admin
I0919 12:10:10.336841  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (753.269µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50572]
I0919 12:10:10.338710  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.481671ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50572]
I0919 12:10:10.339007  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/edit
I0919 12:10:10.340044  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (899.822µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50572]
I0919 12:10:10.341834  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.33229ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50572]
I0919 12:10:10.342002  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/view
I0919 12:10:10.343151  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (860.676µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50572]
I0919 12:10:10.345567  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.152672ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50572]
I0919 12:10:10.345846  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin
I0919 12:10:10.347072  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (839.383µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50572]
I0919 12:10:10.349114  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.485946ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50572]
I0919 12:10:10.349517  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit
I0919 12:10:10.350406  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (691.486µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50572]
I0919 12:10:10.352374  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.513627ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50572]
I0919 12:10:10.352753  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view
I0919 12:10:10.353901  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:heapster: (868.508µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50572]
I0919 12:10:10.355574  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.417375ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50572]
I0919 12:10:10.355852  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:heapster
I0919 12:10:10.356952  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node: (979.716µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50572]
I0919 12:10:10.359168  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.700973ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50572]
I0919 12:10:10.359559  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node
I0919 12:10:10.360704  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-problem-detector: (840.165µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50572]
I0919 12:10:10.362637  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.390705ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50572]
I0919 12:10:10.362944  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector
I0919 12:10:10.363981  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kubelet-api-admin: (813.701µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50572]
I0919 12:10:10.365740  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.423625ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50572]
I0919 12:10:10.365988  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kubelet-api-admin
I0919 12:10:10.367329  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-bootstrapper: (1.03893ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50572]
I0919 12:10:10.369390  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.530345ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50572]
I0919 12:10:10.369613  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-bootstrapper
I0919 12:10:10.370561  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:auth-delegator: (749.841µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50572]
I0919 12:10:10.372806  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.582471ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50572]
I0919 12:10:10.373045  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:auth-delegator
I0919 12:10:10.374289  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-aggregator: (929.004µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50572]
I0919 12:10:10.376256  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.498847ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50572]
I0919 12:10:10.376523  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator
I0919 12:10:10.377634  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-controller-manager: (883.048µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50572]
I0919 12:10:10.379774  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.491556ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50572]
I0919 12:10:10.380113  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager
I0919 12:10:10.381399  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-dns: (1.044092ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50572]
I0919 12:10:10.386032  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.890765ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50572]
I0919 12:10:10.386260  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-dns
I0919 12:10:10.387376  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:persistent-volume-provisioner: (854.914µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50572]
I0919 12:10:10.389534  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.770392ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50572]
I0919 12:10:10.389811  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner
I0919 12:10:10.390809  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-attacher: (816.081µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50572]
I0919 12:10:10.395221  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.860188ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50572]
I0919 12:10:10.395471  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:csi-external-attacher
I0919 12:10:10.396568  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:nodeclient: (914.986µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50572]
I0919 12:10:10.399024  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.790088ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50572]
I0919 12:10:10.399254  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:nodeclient
I0919 12:10:10.400392  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient: (908.789µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50572]
I0919 12:10:10.402294  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.421139ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50572]
I0919 12:10:10.402688  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
I0919 12:10:10.403919  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:volume-scheduler: (1.026078ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50572]
I0919 12:10:10.405670  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.364262ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50572]
I0919 12:10:10.405875  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:volume-scheduler
I0919 12:10:10.410447  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-proxier: (1.234631ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50572]
I0919 12:10:10.410513  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:10:10.410757  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 12:10:10.410930  108421 httplog.go:90] GET /healthz: (1.718547ms) 0 [Go-http-client/1.1 127.0.0.1:50574]
I0919 12:10:10.412907  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.56609ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50574]
I0919 12:10:10.413117  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-proxier
I0919 12:10:10.414358  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-scheduler: (940.03µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50574]
I0919 12:10:10.416159  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.41289ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50574]
I0919 12:10:10.417770  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler
I0919 12:10:10.418845  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-provisioner: (875.345µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50574]
I0919 12:10:10.420364  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:10:10.420391  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 12:10:10.420439  108421 httplog.go:90] GET /healthz: (850.09µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50572]
I0919 12:10:10.421061  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.588461ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50574]
I0919 12:10:10.421278  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:csi-external-provisioner
I0919 12:10:10.422179  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:attachdetach-controller: (710.829µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50574]
I0919 12:10:10.424680  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.300477ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50574]
I0919 12:10:10.425036  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0919 12:10:10.426231  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:clusterrole-aggregation-controller: (952.331µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50574]
I0919 12:10:10.428221  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.515057ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50574]
I0919 12:10:10.428564  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0919 12:10:10.430008  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:cronjob-controller: (1.217538ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50574]
I0919 12:10:10.432147  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.514112ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50574]
I0919 12:10:10.432446  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0919 12:10:10.433445  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:daemon-set-controller: (800.604µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50574]
I0919 12:10:10.435793  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.888981ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50574]
I0919 12:10:10.436015  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0919 12:10:10.437263  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:deployment-controller: (999.477µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50574]
I0919 12:10:10.440862  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.167783ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50574]
I0919 12:10:10.441198  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller
I0919 12:10:10.442499  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:disruption-controller: (1.012053ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50574]
I0919 12:10:10.444570  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.677979ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50574]
I0919 12:10:10.444895  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller
I0919 12:10:10.446581  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:endpoint-controller: (1.369861ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50574]
I0919 12:10:10.448774  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.728212ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50574]
I0919 12:10:10.449163  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0919 12:10:10.450336  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:expand-controller: (933.724µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50574]
I0919 12:10:10.452310  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.484426ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50574]
I0919 12:10:10.452585  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:expand-controller
I0919 12:10:10.453634  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:generic-garbage-collector: (802.707µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50574]
I0919 12:10:10.455395  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.340018ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50574]
I0919 12:10:10.455612  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0919 12:10:10.456796  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:horizontal-pod-autoscaler: (986.725µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50574]
I0919 12:10:10.459059  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.821422ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50574]
I0919 12:10:10.459330  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0919 12:10:10.460508  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:job-controller: (943.67µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50574]
I0919 12:10:10.462537  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.53527ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50574]
I0919 12:10:10.462777  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller
I0919 12:10:10.463870  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:namespace-controller: (873.097µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50574]
I0919 12:10:10.465783  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.445406ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50574]
I0919 12:10:10.465995  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:namespace-controller
I0919 12:10:10.467168  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:node-controller: (988.813µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50574]
I0919 12:10:10.468844  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.257474ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50574]
I0919 12:10:10.469045  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:node-controller
I0919 12:10:10.470006  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:persistent-volume-binder: (786.202µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50574]
I0919 12:10:10.472054  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.514772ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50574]
I0919 12:10:10.472294  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0919 12:10:10.473988  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pod-garbage-collector: (1.497828ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50574]
I0919 12:10:10.475965  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.364186ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50574]
I0919 12:10:10.476284  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0919 12:10:10.477220  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replicaset-controller: (727.589µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50574]
I0919 12:10:10.479007  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.26895ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50574]
I0919 12:10:10.479305  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0919 12:10:10.480515  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replication-controller: (965.965µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50574]
I0919 12:10:10.484597  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.632243ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50574]
I0919 12:10:10.484904  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller
I0919 12:10:10.485792  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:resourcequota-controller: (757.744µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50574]
I0919 12:10:10.487637  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.421477ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50574]
I0919 12:10:10.487853  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0919 12:10:10.488876  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:route-controller: (819.809µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50574]
I0919 12:10:10.491013  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.831078ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50574]
I0919 12:10:10.491377  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:route-controller
I0919 12:10:10.493741  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-account-controller: (2.077892ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50574]
I0919 12:10:10.495994  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.49341ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50574]
I0919 12:10:10.496669  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-account-controller
I0919 12:10:10.497798  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-controller: (788.024µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50574]
I0919 12:10:10.500319  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.05132ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50574]
I0919 12:10:10.500616  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-controller
I0919 12:10:10.501587  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:statefulset-controller: (754.061µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50574]
I0919 12:10:10.503286  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.346069ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50574]
I0919 12:10:10.503543  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0919 12:10:10.504527  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:ttl-controller: (817.958µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50574]
I0919 12:10:10.507206  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.221202ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50574]
I0919 12:10:10.507393  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-controller
I0919 12:10:10.508497  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:certificate-controller: (890.342µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50574]
I0919 12:10:10.509253  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:10:10.509281  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 12:10:10.509334  108421 httplog.go:90] GET /healthz: (918.086µs) 0 [Go-http-client/1.1 127.0.0.1:50572]
I0919 12:10:10.521164  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:10:10.521199  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 12:10:10.521241  108421 httplog.go:90] GET /healthz: (1.27785ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50572]
I0919 12:10:10.524931  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.894143ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50572]
I0919 12:10:10.525245  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller
I0919 12:10:10.544797  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pvc-protection-controller: (1.634949ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50572]
I0919 12:10:10.564920  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.891274ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50572]
I0919 12:10:10.565206  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0919 12:10:10.585206  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pv-protection-controller: (1.514669ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50572]
I0919 12:10:10.605462  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.036071ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50572]
I0919 12:10:10.605744  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0919 12:10:10.609988  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:10:10.610019  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 12:10:10.610072  108421 httplog.go:90] GET /healthz: (1.437767ms) 0 [Go-http-client/1.1 127.0.0.1:50572]
I0919 12:10:10.620804  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:10:10.620840  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 12:10:10.620893  108421 httplog.go:90] GET /healthz: (1.07594ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50572]
I0919 12:10:10.624348  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin: (1.337114ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50572]
I0919 12:10:10.645218  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.159081ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50572]
I0919 12:10:10.645540  108421 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin
I0919 12:10:10.664671  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (1.672621ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50572]
I0919 12:10:10.685940  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.518903ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50572]
I0919 12:10:10.686130  108421 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery
I0919 12:10:10.704260  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:basic-user: (1.154847ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50572]
I0919 12:10:10.709611  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:10:10.709812  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 12:10:10.710168  108421 httplog.go:90] GET /healthz: (1.389621ms) 0 [Go-http-client/1.1 127.0.0.1:50572]
I0919 12:10:10.721075  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:10:10.721110  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 12:10:10.721168  108421 httplog.go:90] GET /healthz: (1.293002ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50572]
I0919 12:10:10.725253  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.259969ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50572]
I0919 12:10:10.725535  108421 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user
I0919 12:10:10.744670  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:public-info-viewer: (1.572769ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50572]
I0919 12:10:10.766043  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.029085ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50572]
I0919 12:10:10.766377  108421 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:public-info-viewer
I0919 12:10:10.786994  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node-proxier: (2.712724ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50572]
I0919 12:10:10.805141  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.921061ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50572]
I0919 12:10:10.805476  108421 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier
I0919 12:10:10.809503  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:10:10.809563  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 12:10:10.809640  108421 httplog.go:90] GET /healthz: (1.074088ms) 0 [Go-http-client/1.1 127.0.0.1:50572]
I0919 12:10:10.820942  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:10:10.820976  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 12:10:10.821075  108421 httplog.go:90] GET /healthz: (1.259104ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50572]
I0919 12:10:10.824116  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-controller-manager: (1.140161ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50572]
I0919 12:10:10.845225  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.134563ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50572]
I0919 12:10:10.845486  108421 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager
I0919 12:10:10.864142  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-dns: (1.155787ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50572]
I0919 12:10:10.885287  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.225935ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50572]
I0919 12:10:10.885538  108421 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns
I0919 12:10:10.904768  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-scheduler: (1.690789ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50572]
I0919 12:10:10.919281  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:10:10.919653  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 12:10:10.919915  108421 httplog.go:90] GET /healthz: (3.271928ms) 0 [Go-http-client/1.1 127.0.0.1:50572]
I0919 12:10:10.932584  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:10:10.932616  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 12:10:10.932658  108421 httplog.go:90] GET /healthz: (1.799977ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50574]
I0919 12:10:10.932965  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.923324ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50572]
I0919 12:10:10.933389  108421 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler
I0919 12:10:10.946302  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:volume-scheduler: (1.837702ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50574]
I0919 12:10:10.965343  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.286686ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50574]
I0919 12:10:10.965623  108421 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:volume-scheduler
I0919 12:10:10.984347  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node: (1.315299ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50574]
I0919 12:10:11.005655  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.628308ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50574]
I0919 12:10:11.005911  108421 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node
I0919 12:10:11.010591  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:10:11.010622  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 12:10:11.010664  108421 httplog.go:90] GET /healthz: (2.108574ms) 0 [Go-http-client/1.1 127.0.0.1:50574]
I0919 12:10:11.020840  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:10:11.020889  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 12:10:11.020920  108421 httplog.go:90] GET /healthz: (1.202502ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50574]
I0919 12:10:11.024325  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:attachdetach-controller: (1.3952ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50574]
I0919 12:10:11.045785  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.538367ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50574]
I0919 12:10:11.046068  108421 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0919 12:10:11.064939  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:clusterrole-aggregation-controller: (1.871201ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50574]
I0919 12:10:11.094756  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (10.021486ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50574]
I0919 12:10:11.095146  108421 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0919 12:10:11.103937  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:cronjob-controller: (952.355µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50574]
I0919 12:10:11.109984  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:10:11.110012  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 12:10:11.110081  108421 httplog.go:90] GET /healthz: (1.424702ms) 0 [Go-http-client/1.1 127.0.0.1:50574]
I0919 12:10:11.120961  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:10:11.120988  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 12:10:11.121022  108421 httplog.go:90] GET /healthz: (1.132088ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50574]
I0919 12:10:11.124881  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.865221ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50574]
I0919 12:10:11.125227  108421 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0919 12:10:11.144236  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:daemon-set-controller: (1.246667ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50574]
I0919 12:10:11.165480  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.416255ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50574]
I0919 12:10:11.165821  108421 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0919 12:10:11.184504  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:deployment-controller: (1.488159ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50574]
I0919 12:10:11.205081  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.029569ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50574]
I0919 12:10:11.205319  108421 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I0919 12:10:11.209444  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:10:11.209479  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 12:10:11.209525  108421 httplog.go:90] GET /healthz: (913.35µs) 0 [Go-http-client/1.1 127.0.0.1:50574]
I0919 12:10:11.220984  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:10:11.221089  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 12:10:11.221155  108421 httplog.go:90] GET /healthz: (1.337886ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50574]
I0919 12:10:11.224312  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:disruption-controller: (1.318679ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50574]
I0919 12:10:11.245594  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.494545ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50574]
I0919 12:10:11.245840  108421 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I0919 12:10:11.264255  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:endpoint-controller: (1.199349ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50574]
I0919 12:10:11.285741  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.721238ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50574]
I0919 12:10:11.286088  108421 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0919 12:10:11.304028  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:expand-controller: (1.030948ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50574]
I0919 12:10:11.309737  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:10:11.309772  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 12:10:11.309819  108421 httplog.go:90] GET /healthz: (1.237875ms) 0 [Go-http-client/1.1 127.0.0.1:50574]
I0919 12:10:11.321040  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:10:11.321263  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 12:10:11.321449  108421 httplog.go:90] GET /healthz: (1.588583ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50574]
I0919 12:10:11.325121  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.131054ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50574]
I0919 12:10:11.325547  108421 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
I0919 12:10:11.344389  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:generic-garbage-collector: (1.312622ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50574]
I0919 12:10:11.365292  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.236708ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50574]
I0919 12:10:11.365675  108421 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0919 12:10:11.384558  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:horizontal-pod-autoscaler: (1.47849ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50574]
I0919 12:10:11.404979  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.004133ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50574]
I0919 12:10:11.405266  108421 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0919 12:10:11.409495  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:10:11.409530  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 12:10:11.409576  108421 httplog.go:90] GET /healthz: (995.481µs) 0 [Go-http-client/1.1 127.0.0.1:50574]
I0919 12:10:11.420946  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:10:11.420982  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 12:10:11.421033  108421 httplog.go:90] GET /healthz: (1.213711ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50574]
I0919 12:10:11.424061  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:job-controller: (1.096841ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50574]
I0919 12:10:11.445196  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.18787ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50574]
I0919 12:10:11.445491  108421 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I0919 12:10:11.464175  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:namespace-controller: (1.19037ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50574]
I0919 12:10:11.485888  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.821741ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50574]
I0919 12:10:11.486127  108421 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I0919 12:10:11.504083  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:node-controller: (1.130534ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50574]
I0919 12:10:11.509257  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:10:11.509477  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 12:10:11.509692  108421 httplog.go:90] GET /healthz: (1.159109ms) 0 [Go-http-client/1.1 127.0.0.1:50574]
I0919 12:10:11.520602  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:10:11.520643  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 12:10:11.520683  108421 httplog.go:90] GET /healthz: (924.52µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50574]
I0919 12:10:11.524765  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.772294ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50574]
I0919 12:10:11.525472  108421 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I0919 12:10:11.544634  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:persistent-volume-binder: (1.416044ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50574]
I0919 12:10:11.565073  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.040997ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50574]
I0919 12:10:11.565285  108421 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0919 12:10:11.584413  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pod-garbage-collector: (1.302848ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50574]
I0919 12:10:11.605452  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.40705ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50574]
I0919 12:10:11.605726  108421 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0919 12:10:11.609718  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:10:11.609859  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 12:10:11.609904  108421 httplog.go:90] GET /healthz: (1.384905ms) 0 [Go-http-client/1.1 127.0.0.1:50574]
I0919 12:10:11.620826  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:10:11.620863  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 12:10:11.620911  108421 httplog.go:90] GET /healthz: (1.141583ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50574]
I0919 12:10:11.624063  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replicaset-controller: (1.073837ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50574]
I0919 12:10:11.645626  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.146299ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50574]
I0919 12:10:11.646183  108421 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0919 12:10:11.664558  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replication-controller: (1.547508ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50574]
I0919 12:10:11.689353  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (6.040521ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50574]
I0919 12:10:11.689699  108421 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I0919 12:10:11.704206  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:resourcequota-controller: (1.222829ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50574]
I0919 12:10:11.709463  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:10:11.709676  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 12:10:11.709859  108421 httplog.go:90] GET /healthz: (1.336135ms) 0 [Go-http-client/1.1 127.0.0.1:50574]
I0919 12:10:11.721093  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:10:11.721403  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 12:10:11.721618  108421 httplog.go:90] GET /healthz: (1.828435ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50574]
I0919 12:10:11.724993  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.008828ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50574]
I0919 12:10:11.725460  108421 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0919 12:10:11.744411  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:route-controller: (1.263044ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50574]
I0919 12:10:11.765880  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.726615ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50574]
I0919 12:10:11.766179  108421 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I0919 12:10:11.784912  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-account-controller: (1.1694ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50574]
I0919 12:10:11.805330  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.156445ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50574]
I0919 12:10:11.805729  108421 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I0919 12:10:11.809727  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:10:11.809769  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 12:10:11.809863  108421 httplog.go:90] GET /healthz: (1.229617ms) 0 [Go-http-client/1.1 127.0.0.1:50574]
I0919 12:10:11.821244  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:10:11.821280  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 12:10:11.821324  108421 httplog.go:90] GET /healthz: (1.346571ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50574]
I0919 12:10:11.824551  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-controller: (1.619309ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50574]
I0919 12:10:11.845210  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.11327ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50574]
I0919 12:10:11.845623  108421 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I0919 12:10:11.864455  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:statefulset-controller: (1.330349ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50574]
I0919 12:10:11.886744  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.296506ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50574]
I0919 12:10:11.887060  108421 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0919 12:10:11.904307  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:ttl-controller: (1.275025ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50574]
I0919 12:10:11.909732  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:10:11.910005  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 12:10:11.910258  108421 httplog.go:90] GET /healthz: (1.525636ms) 0 [Go-http-client/1.1 127.0.0.1:50574]
I0919 12:10:11.921312  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:10:11.921345  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 12:10:11.921387  108421 httplog.go:90] GET /healthz: (1.613422ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50574]
I0919 12:10:11.924995  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.969534ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50574]
I0919 12:10:11.925290  108421 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I0919 12:10:11.944657  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:certificate-controller: (1.593285ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50574]
I0919 12:10:11.965680  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.600094ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50574]
I0919 12:10:11.965908  108421 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I0919 12:10:11.984784  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pvc-protection-controller: (1.593772ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50574]
I0919 12:10:12.015511  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (12.458884ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50574]
I0919 12:10:12.015843  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:10:12.015865  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 12:10:12.015901  108421 httplog.go:90] GET /healthz: (7.069328ms) 0 [Go-http-client/1.1 127.0.0.1:50666]
I0919 12:10:12.015938  108421 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0919 12:10:12.021380  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:10:12.021413  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 12:10:12.021466  108421 httplog.go:90] GET /healthz: (991.334µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50666]
I0919 12:10:12.024145  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pv-protection-controller: (1.181439ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50666]
I0919 12:10:12.045259  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.20168ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50666]
I0919 12:10:12.045597  108421 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0919 12:10:12.064446  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/extension-apiserver-authentication-reader: (1.408254ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50666]
I0919 12:10:12.066170  108421 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.290596ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50666]
I0919 12:10:12.085582  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.557896ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50666]
I0919 12:10:12.088917  108421 storage_rbac.go:278] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
I0919 12:10:12.104201  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:bootstrap-signer: (1.126946ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50666]
I0919 12:10:12.105913  108421 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.14962ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50666]
I0919 12:10:12.109238  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:10:12.109267  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 12:10:12.109302  108421 httplog.go:90] GET /healthz: (815.686µs) 0 [Go-http-client/1.1 127.0.0.1:50666]
I0919 12:10:12.121891  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:10:12.121922  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 12:10:12.121970  108421 httplog.go:90] GET /healthz: (1.104556ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50666]
I0919 12:10:12.124821  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.822507ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50666]
I0919 12:10:12.124990  108421 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0919 12:10:12.144220  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:cloud-provider: (1.104557ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50666]
I0919 12:10:12.147078  108421 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.225865ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50666]
I0919 12:10:12.165350  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.253899ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50666]
I0919 12:10:12.165912  108421 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0919 12:10:12.187293  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:token-cleaner: (1.224018ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50666]
I0919 12:10:12.189030  108421 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.302261ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50666]
I0919 12:10:12.205102  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.982916ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50666]
I0919 12:10:12.205612  108421 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0919 12:10:12.209918  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:10:12.209949  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 12:10:12.209997  108421 httplog.go:90] GET /healthz: (1.282525ms) 0 [Go-http-client/1.1 127.0.0.1:50666]
I0919 12:10:12.220917  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:10:12.220952  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 12:10:12.220997  108421 httplog.go:90] GET /healthz: (1.159615ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50666]
I0919 12:10:12.224525  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-controller-manager: (1.490306ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50666]
I0919 12:10:12.226056  108421 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.12166ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50666]
I0919 12:10:12.244955  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.915538ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50666]
I0919 12:10:12.245394  108421 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0919 12:10:12.264696  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-scheduler: (1.609309ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50666]
I0919 12:10:12.266557  108421 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.336352ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50666]
I0919 12:10:12.285377  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.078478ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50666]
I0919 12:10:12.289000  108421 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0919 12:10:12.304667  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles/system:controller:bootstrap-signer: (1.642508ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50666]
I0919 12:10:12.306665  108421 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.529354ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50666]
I0919 12:10:12.309771  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:10:12.309810  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 12:10:12.309861  108421 httplog.go:90] GET /healthz: (1.07996ms) 0 [Go-http-client/1.1 127.0.0.1:50666]
I0919 12:10:12.320994  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:10:12.321030  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 12:10:12.321076  108421 httplog.go:90] GET /healthz: (1.246933ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50666]
I0919 12:10:12.325394  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles: (2.427771ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50666]
I0919 12:10:12.325912  108421 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0919 12:10:12.344276  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings/system:controller:bootstrap-signer: (1.25363ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50666]
I0919 12:10:12.346190  108421 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.423179ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50666]
I0919 12:10:12.365216  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings: (2.224142ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50666]
I0919 12:10:12.365606  108421 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0919 12:10:12.384261  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::extension-apiserver-authentication-reader: (1.224268ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50666]
I0919 12:10:12.388664  108421 httplog.go:90] GET /api/v1/namespaces/kube-system: (3.577355ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50666]
I0919 12:10:12.404850  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.803373ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50666]
I0919 12:10:12.405113  108421 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::extension-apiserver-authentication-reader in kube-system
I0919 12:10:12.409640  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:10:12.409672  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 12:10:12.409711  108421 httplog.go:90] GET /healthz: (1.1243ms) 0 [Go-http-client/1.1 127.0.0.1:50666]
I0919 12:10:12.420830  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:10:12.420861  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 12:10:12.420897  108421 httplog.go:90] GET /healthz: (1.173774ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50666]
I0919 12:10:12.424169  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-controller-manager: (1.205076ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50666]
I0919 12:10:12.425763  108421 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.05708ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50666]
I0919 12:10:12.445235  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.153472ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50666]
I0919 12:10:12.445618  108421 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0919 12:10:12.464487  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-scheduler: (1.435246ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50666]
I0919 12:10:12.466256  108421 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.272727ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50666]
I0919 12:10:12.486289  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (3.118864ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50666]
I0919 12:10:12.486603  108421 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0919 12:10:12.504151  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:bootstrap-signer: (1.149855ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50666]
I0919 12:10:12.505997  108421 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.356637ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50666]
I0919 12:10:12.509374  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:10:12.509404  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 12:10:12.509498  108421 httplog.go:90] GET /healthz: (991.729µs) 0 [Go-http-client/1.1 127.0.0.1:50666]
I0919 12:10:12.522162  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:10:12.522191  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 12:10:12.522235  108421 httplog.go:90] GET /healthz: (2.469291ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50666]
I0919 12:10:12.525020  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.112682ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50666]
I0919 12:10:12.525246  108421 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0919 12:10:12.544169  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:cloud-provider: (1.191468ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50666]
I0919 12:10:12.545705  108421 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.035677ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50666]
I0919 12:10:12.564971  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.924128ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50666]
I0919 12:10:12.565510  108421 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0919 12:10:12.585647  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:token-cleaner: (1.234705ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50666]
I0919 12:10:12.587989  108421 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.249122ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50666]
I0919 12:10:12.605134  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.079634ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50666]
I0919 12:10:12.605368  108421 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0919 12:10:12.609892  108421 httplog.go:90] GET /healthz: (1.179303ms) 200 [Go-http-client/1.1 127.0.0.1:50666]
I0919 12:10:12.612914  108421 httplog.go:90] POST /api/v1/namespaces/kube-system/configmaps: (2.078995ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50666]
W0919 12:10:12.613366  108421 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 12:10:12.613633  108421 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 12:10:12.613752  108421 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 12:10:12.613860  108421 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 12:10:12.613971  108421 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 12:10:12.614049  108421 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 12:10:12.614132  108421 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 12:10:12.614193  108421 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 12:10:12.614316  108421 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 12:10:12.614375  108421 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 12:10:12.614489  108421 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0919 12:10:12.616125  108421 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps/scheduler-custom-policy-config-0: (1.333732ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50666]
I0919 12:10:12.617612  108421 factory.go:304] Creating scheduler from configuration: {{ } [{PredicateOne <nil>} {PredicateTwo <nil>}] [{PriorityOne 1 <nil>} {PriorityTwo 5 <nil>}] [] 0 false}
I0919 12:10:12.617764  108421 factory.go:321] Registering predicate: PredicateOne
I0919 12:10:12.617851  108421 plugins.go:288] Predicate type PredicateOne already registered, reusing.
I0919 12:10:12.617914  108421 factory.go:321] Registering predicate: PredicateTwo
I0919 12:10:12.617966  108421 plugins.go:288] Predicate type PredicateTwo already registered, reusing.
I0919 12:10:12.618018  108421 factory.go:336] Registering priority: PriorityOne
I0919 12:10:12.618085  108421 plugins.go:399] Priority type PriorityOne already registered, reusing.
I0919 12:10:12.618144  108421 factory.go:336] Registering priority: PriorityTwo
I0919 12:10:12.618259  108421 plugins.go:399] Priority type PriorityTwo already registered, reusing.
I0919 12:10:12.618404  108421 factory.go:382] Creating scheduler with fit predicates 'map[PredicateOne:{} PredicateTwo:{}]' and priority functions 'map[PriorityOne:{} PriorityTwo:{}]'
I0919 12:10:12.620577  108421 httplog.go:90] GET /healthz: (844.154µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50574]
I0919 12:10:12.620882  108421 httplog.go:90] POST /api/v1/namespaces/kube-system/configmaps: (1.950713ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50666]
W0919 12:10:12.621343  108421 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0919 12:10:12.622032  108421 httplog.go:90] GET /api/v1/namespaces/default: (1.07676ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50574]
I0919 12:10:12.622725  108421 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps/scheduler-custom-policy-config-1: (978.96µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50666]
I0919 12:10:12.623122  108421 factory.go:304] Creating scheduler from configuration: {{ } [] [] [] 0 false}
I0919 12:10:12.623151  108421 factory.go:313] Using predicates from algorithm provider 'DefaultProvider'
I0919 12:10:12.623161  108421 factory.go:328] Using priorities from algorithm provider 'DefaultProvider'
I0919 12:10:12.623166  108421 factory.go:382] Creating scheduler with fit predicates 'map[CheckNodeUnschedulable:{} CheckVolumeBinding:{} GeneralPredicates:{} MatchInterPodAffinity:{} MaxAzureDiskVolumeCount:{} MaxCSIVolumeCountPred:{} MaxEBSVolumeCount:{} MaxGCEPDVolumeCount:{} NoDiskConflict:{} NoVolumeZoneConflict:{} PodToleratesNodeTaints:{}]' and priority functions 'map[BalancedResourceAllocation:{} ImageLocalityPriority:{} InterPodAffinityPriority:{} LeastRequestedPriority:{} NodeAffinityPriority:{} NodePreferAvoidPodsPriority:{} SelectorSpreadPriority:{} TaintTolerationPriority:{}]'
I0919 12:10:12.623575  108421 httplog.go:90] POST /api/v1/namespaces: (1.276733ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50574]
I0919 12:10:12.624905  108421 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (788.925µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50574]
I0919 12:10:12.625981  108421 httplog.go:90] POST /api/v1/namespaces/kube-system/configmaps: (2.448662ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50666]
W0919 12:10:12.626380  108421 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0919 12:10:12.628025  108421 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps/scheduler-custom-policy-config-2: (1.231387ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50666]
I0919 12:10:12.628230  108421 factory.go:304] Creating scheduler from configuration: {{ } [] [] [] 0 false}
I0919 12:10:12.628250  108421 factory.go:382] Creating scheduler with fit predicates 'map[]' and priority functions 'map[]'
I0919 12:10:12.628995  108421 httplog.go:90] POST /api/v1/namespaces/default/services: (3.810392ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50574]
I0919 12:10:12.630769  108421 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.312255ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50574]
I0919 12:10:12.630913  108421 httplog.go:90] POST /api/v1/namespaces/kube-system/configmaps: (2.207963ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50666]
W0919 12:10:12.631241  108421 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0919 12:10:12.631672  108421 httplog.go:90] POST /api/v1/namespaces/default/endpoints: (559.467µs) 422 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50574]
E0919 12:10:12.631887  108421 controller.go:224] unable to sync kubernetes service: Endpoints "kubernetes" is invalid: [subsets[0].addresses[0].ip: Invalid value: "<nil>": must be a valid IP address, (e.g. 10.9.8.7), subsets[0].addresses[0].ip: Invalid value: "<nil>": must be a valid IP address]
I0919 12:10:12.632759  108421 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps/scheduler-custom-policy-config-3: (1.162051ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50666]
I0919 12:10:12.633232  108421 factory.go:304] Creating scheduler from configuration: {{ } [{PredicateOne <nil>} {PredicateTwo <nil>}] [{PriorityOne 1 <nil>} {PriorityTwo 5 <nil>}] [] 0 false}
I0919 12:10:12.633274  108421 factory.go:321] Registering predicate: PredicateOne
I0919 12:10:12.633284  108421 plugins.go:288] Predicate type PredicateOne already registered, reusing.
I0919 12:10:12.633291  108421 factory.go:321] Registering predicate: PredicateTwo
I0919 12:10:12.633296  108421 plugins.go:288] Predicate type PredicateTwo already registered, reusing.
I0919 12:10:12.633301  108421 factory.go:336] Registering priority: PriorityOne
I0919 12:10:12.633307  108421 plugins.go:399] Priority type PriorityOne already registered, reusing.
I0919 12:10:12.633316  108421 factory.go:336] Registering priority: PriorityTwo
I0919 12:10:12.633320  108421 plugins.go:399] Priority type PriorityTwo already registered, reusing.
I0919 12:10:12.633325  108421 factory.go:382] Creating scheduler with fit predicates 'map[PredicateOne:{} PredicateTwo:{}]' and priority functions 'map[PriorityOne:{} PriorityTwo:{}]'
I0919 12:10:12.635448  108421 httplog.go:90] POST /api/v1/namespaces/kube-system/configmaps: (1.654648ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50666]
W0919 12:10:12.635850  108421 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0919 12:10:12.637261  108421 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps/scheduler-custom-policy-config-4: (972.227µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50666]
I0919 12:10:12.637942  108421 factory.go:304] Creating scheduler from configuration: {{ } [] [] [] 0 false}
I0919 12:10:12.638073  108421 factory.go:313] Using predicates from algorithm provider 'DefaultProvider'
I0919 12:10:12.638132  108421 factory.go:328] Using priorities from algorithm provider 'DefaultProvider'
I0919 12:10:12.638185  108421 factory.go:382] Creating scheduler with fit predicates 'map[CheckNodeUnschedulable:{} CheckVolumeBinding:{} GeneralPredicates:{} MatchInterPodAffinity:{} MaxAzureDiskVolumeCount:{} MaxCSIVolumeCountPred:{} MaxEBSVolumeCount:{} MaxGCEPDVolumeCount:{} NoDiskConflict:{} NoVolumeZoneConflict:{} PodToleratesNodeTaints:{}]' and priority functions 'map[BalancedResourceAllocation:{} ImageLocalityPriority:{} InterPodAffinityPriority:{} LeastRequestedPriority:{} NodeAffinityPriority:{} NodePreferAvoidPodsPriority:{} SelectorSpreadPriority:{} TaintTolerationPriority:{}]'
I0919 12:10:12.810852  108421 request.go:538] Throttling request took 172.36552ms, request: POST:http://127.0.0.1:43783/api/v1/namespaces/kube-system/configmaps
I0919 12:10:12.813212  108421 httplog.go:90] POST /api/v1/namespaces/kube-system/configmaps: (1.927142ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50666]
W0919 12:10:12.813541  108421 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0919 12:10:13.010993  108421 request.go:538] Throttling request took 197.235201ms, request: GET:http://127.0.0.1:43783/api/v1/namespaces/kube-system/configmaps/scheduler-custom-policy-config-5
I0919 12:10:13.012814  108421 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps/scheduler-custom-policy-config-5: (1.514021ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50666]
I0919 12:10:13.013304  108421 factory.go:304] Creating scheduler from configuration: {{ } [] [] [] 0 false}
I0919 12:10:13.013332  108421 factory.go:382] Creating scheduler with fit predicates 'map[]' and priority functions 'map[]'
I0919 12:10:13.210952  108421 request.go:538] Throttling request took 197.238884ms, request: DELETE:http://127.0.0.1:43783/api/v1/nodes
I0919 12:10:13.212846  108421 httplog.go:90] DELETE /api/v1/nodes: (1.596431ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50666]
I0919 12:10:13.213129  108421 controller.go:182] Shutting down kubernetes service endpoint reconciler
I0919 12:10:13.215029  108421 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.367204ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50666]
--- FAIL: TestSchedulerCreationFromConfigMap (4.08s)
    scheduler_test.go:283: Expected predicates map[PredicateOne:{} PredicateTwo:{}], got map[CheckNodeUnschedulable:{} PodToleratesNodeTaints:{} PredicateOne:{} PredicateTwo:{}]
    scheduler_test.go:283: Expected predicates map[CheckNodeCondition:{}], got map[CheckNodeUnschedulable:{} PodToleratesNodeTaints:{}]
    scheduler_test.go:283: Expected predicates map[PredicateOne:{} PredicateTwo:{}], got map[CheckNodeUnschedulable:{} PodToleratesNodeTaints:{} PredicateOne:{} PredicateTwo:{}]
    scheduler_test.go:283: Expected predicates map[CheckNodeCondition:{}], got map[CheckNodeUnschedulable:{} PodToleratesNodeTaints:{}]

				from junit_d965d8661547eb73cabe6d94d5550ec333e4c0fa_20190919-120028.xml



k8s.io/kubernetes/test/integration/scheduler TestTaintBasedEvictions 2m20s

go test -v k8s.io/kubernetes/test/integration/scheduler -run TestTaintBasedEvictions$
=== RUN   TestTaintBasedEvictions
I0919 12:11:34.462011  108421 feature_gate.go:216] feature gates: &{map[EvenPodsSpread:false TaintBasedEvictions:true]}
--- FAIL: TestTaintBasedEvictions (140.30s)

				from junit_d965d8661547eb73cabe6d94d5550ec333e4c0fa_20190919-120028.xml



k8s.io/kubernetes/test/integration/scheduler TestTaintBasedEvictions/Taint_based_evictions_for_NodeNotReady_and_0_tolerationseconds 35s

go test -v k8s.io/kubernetes/test/integration/scheduler -run TestTaintBasedEvictions/Taint_based_evictions_for_NodeNotReady_and_0_tolerationseconds$
=== RUN   TestTaintBasedEvictions/Taint_based_evictions_for_NodeNotReady_and_0_tolerationseconds
W0919 12:12:44.705376  108421 services.go:35] No CIDR for service cluster IPs specified. Default value which was 10.0.0.0/24 is deprecated and will be removed in future releases. Please specify it using --service-cluster-ip-range on kube-apiserver.
I0919 12:12:44.705440  108421 services.go:47] Setting service IP to "10.0.0.1" (read-write).
I0919 12:12:44.705453  108421 master.go:303] Node port range unspecified. Defaulting to 30000-32767.
I0919 12:12:44.705463  108421 master.go:259] Using reconciler: 
I0919 12:12:44.706883  108421 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.707036  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:12:44.707140  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:12:44.708126  108421 store.go:1342] Monitoring podtemplates count at <storage-prefix>//podtemplates
I0919 12:12:44.708177  108421 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.708202  108421 reflector.go:153] Listing and watching *core.PodTemplate from storage/cacher.go:/podtemplates
I0919 12:12:44.708490  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:12:44.708523  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:12:44.709671  108421 watch_cache.go:405] Replace watchCache (rev: 59826) 
I0919 12:12:44.709876  108421 store.go:1342] Monitoring events count at <storage-prefix>//events
I0919 12:12:44.709923  108421 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.709950  108421 reflector.go:153] Listing and watching *core.Event from storage/cacher.go:/events
I0919 12:12:44.710053  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:12:44.710124  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:12:44.711065  108421 store.go:1342] Monitoring limitranges count at <storage-prefix>//limitranges
I0919 12:12:44.711180  108421 reflector.go:153] Listing and watching *core.LimitRange from storage/cacher.go:/limitranges
I0919 12:12:44.711270  108421 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.711410  108421 watch_cache.go:405] Replace watchCache (rev: 59826) 
I0919 12:12:44.711648  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:12:44.711749  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:12:44.712008  108421 watch_cache.go:405] Replace watchCache (rev: 59826) 
I0919 12:12:44.713338  108421 store.go:1342] Monitoring resourcequotas count at <storage-prefix>//resourcequotas
I0919 12:12:44.713390  108421 reflector.go:153] Listing and watching *core.ResourceQuota from storage/cacher.go:/resourcequotas
I0919 12:12:44.713573  108421 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.713718  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:12:44.713742  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:12:44.715014  108421 store.go:1342] Monitoring secrets count at <storage-prefix>//secrets
I0919 12:12:44.715076  108421 reflector.go:153] Listing and watching *core.Secret from storage/cacher.go:/secrets
I0919 12:12:44.715374  108421 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.715496  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:12:44.715507  108421 watch_cache.go:405] Replace watchCache (rev: 59826) 
I0919 12:12:44.715513  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:12:44.716086  108421 watch_cache.go:405] Replace watchCache (rev: 59826) 
I0919 12:12:44.716370  108421 store.go:1342] Monitoring persistentvolumes count at <storage-prefix>//persistentvolumes
I0919 12:12:44.716473  108421 reflector.go:153] Listing and watching *core.PersistentVolume from storage/cacher.go:/persistentvolumes
I0919 12:12:44.716774  108421 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.717068  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:12:44.717194  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:12:44.717273  108421 watch_cache.go:405] Replace watchCache (rev: 59826) 
I0919 12:12:44.718037  108421 store.go:1342] Monitoring persistentvolumeclaims count at <storage-prefix>//persistentvolumeclaims
I0919 12:12:44.718144  108421 reflector.go:153] Listing and watching *core.PersistentVolumeClaim from storage/cacher.go:/persistentvolumeclaims
I0919 12:12:44.718235  108421 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.718386  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:12:44.718405  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:12:44.718901  108421 watch_cache.go:405] Replace watchCache (rev: 59826) 
I0919 12:12:44.719030  108421 store.go:1342] Monitoring configmaps count at <storage-prefix>//configmaps
I0919 12:12:44.719122  108421 reflector.go:153] Listing and watching *core.ConfigMap from storage/cacher.go:/configmaps
I0919 12:12:44.719274  108421 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.719409  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:12:44.719454  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:12:44.720134  108421 store.go:1342] Monitoring namespaces count at <storage-prefix>//namespaces
I0919 12:12:44.720234  108421 reflector.go:153] Listing and watching *core.Namespace from storage/cacher.go:/namespaces
I0919 12:12:44.720161  108421 watch_cache.go:405] Replace watchCache (rev: 59826) 
I0919 12:12:44.720397  108421 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.720720  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:12:44.720755  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:12:44.721608  108421 watch_cache.go:405] Replace watchCache (rev: 59826) 
I0919 12:12:44.722124  108421 store.go:1342] Monitoring endpoints count at <storage-prefix>//services/endpoints
I0919 12:12:44.722194  108421 reflector.go:153] Listing and watching *core.Endpoints from storage/cacher.go:/services/endpoints
I0919 12:12:44.722337  108421 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.722471  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:12:44.722502  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:12:44.722910  108421 watch_cache.go:405] Replace watchCache (rev: 59826) 
I0919 12:12:44.723242  108421 store.go:1342] Monitoring nodes count at <storage-prefix>//minions
I0919 12:12:44.723359  108421 reflector.go:153] Listing and watching *core.Node from storage/cacher.go:/minions
I0919 12:12:44.723389  108421 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.723528  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:12:44.723552  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:12:44.724222  108421 watch_cache.go:405] Replace watchCache (rev: 59826) 
I0919 12:12:44.724359  108421 store.go:1342] Monitoring pods count at <storage-prefix>//pods
I0919 12:12:44.724401  108421 reflector.go:153] Listing and watching *core.Pod from storage/cacher.go:/pods
I0919 12:12:44.724571  108421 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.724695  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:12:44.724794  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:12:44.725328  108421 watch_cache.go:405] Replace watchCache (rev: 59826) 
I0919 12:12:44.725570  108421 store.go:1342] Monitoring serviceaccounts count at <storage-prefix>//serviceaccounts
I0919 12:12:44.725632  108421 reflector.go:153] Listing and watching *core.ServiceAccount from storage/cacher.go:/serviceaccounts
I0919 12:12:44.725755  108421 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.725896  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:12:44.725933  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:12:44.726730  108421 watch_cache.go:405] Replace watchCache (rev: 59826) 
I0919 12:12:44.727154  108421 store.go:1342] Monitoring services count at <storage-prefix>//services/specs
I0919 12:12:44.727221  108421 reflector.go:153] Listing and watching *core.Service from storage/cacher.go:/services/specs
I0919 12:12:44.727460  108421 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.727687  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:12:44.727770  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:12:44.728118  108421 watch_cache.go:405] Replace watchCache (rev: 59826) 
I0919 12:12:44.728484  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:12:44.728518  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:12:44.729224  108421 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.729343  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:12:44.729366  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:12:44.729923  108421 store.go:1342] Monitoring replicationcontrollers count at <storage-prefix>//controllers
I0919 12:12:44.729955  108421 rest.go:115] the default service ipfamily for this cluster is: IPv4
I0919 12:12:44.730026  108421 reflector.go:153] Listing and watching *core.ReplicationController from storage/cacher.go:/controllers
I0919 12:12:44.730550  108421 storage_factory.go:285] storing bindings in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.730803  108421 watch_cache.go:405] Replace watchCache (rev: 59826) 
I0919 12:12:44.730797  108421 storage_factory.go:285] storing componentstatuses in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.731651  108421 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.732815  108421 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.733733  108421 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.734592  108421 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.735093  108421 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.735279  108421 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.735565  108421 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.736138  108421 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.736885  108421 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.737509  108421 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.738451  108421 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.738796  108421 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.739454  108421 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.739717  108421 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.740538  108421 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.740785  108421 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.740947  108421 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.741240  108421 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.741492  108421 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.741648  108421 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.741846  108421 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.743070  108421 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.743381  108421 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.744375  108421 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.745300  108421 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.745739  108421 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.746104  108421 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.747280  108421 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.747653  108421 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.748462  108421 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.749316  108421 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.750078  108421 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.751443  108421 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.751772  108421 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.751901  108421 master.go:450] Skipping disabled API group "auditregistration.k8s.io".
I0919 12:12:44.751927  108421 master.go:461] Enabling API group "authentication.k8s.io".
I0919 12:12:44.751946  108421 master.go:461] Enabling API group "authorization.k8s.io".
I0919 12:12:44.752158  108421 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.752361  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:12:44.752396  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:12:44.753605  108421 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0919 12:12:44.753715  108421 reflector.go:153] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0919 12:12:44.753830  108421 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.754107  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:12:44.754195  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:12:44.754883  108421 watch_cache.go:405] Replace watchCache (rev: 59826) 
I0919 12:12:44.755259  108421 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0919 12:12:44.755403  108421 reflector.go:153] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0919 12:12:44.755498  108421 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.755653  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:12:44.755684  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:12:44.756515  108421 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0919 12:12:44.756550  108421 master.go:461] Enabling API group "autoscaling".
I0919 12:12:44.756575  108421 reflector.go:153] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0919 12:12:44.756683  108421 watch_cache.go:405] Replace watchCache (rev: 59826) 
I0919 12:12:44.756742  108421 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.756894  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:12:44.756937  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:12:44.757552  108421 watch_cache.go:405] Replace watchCache (rev: 59826) 
I0919 12:12:44.757825  108421 store.go:1342] Monitoring jobs.batch count at <storage-prefix>//jobs
I0919 12:12:44.757864  108421 reflector.go:153] Listing and watching *batch.Job from storage/cacher.go:/jobs
I0919 12:12:44.758025  108421 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.758190  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:12:44.758223  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:12:44.758927  108421 watch_cache.go:405] Replace watchCache (rev: 59826) 
I0919 12:12:44.759046  108421 store.go:1342] Monitoring cronjobs.batch count at <storage-prefix>//cronjobs
I0919 12:12:44.759078  108421 master.go:461] Enabling API group "batch".
I0919 12:12:44.759131  108421 reflector.go:153] Listing and watching *batch.CronJob from storage/cacher.go:/cronjobs
I0919 12:12:44.759275  108421 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.759435  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:12:44.759463  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:12:44.759980  108421 store.go:1342] Monitoring certificatesigningrequests.certificates.k8s.io count at <storage-prefix>//certificatesigningrequests
I0919 12:12:44.760008  108421 master.go:461] Enabling API group "certificates.k8s.io".
I0919 12:12:44.760191  108421 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.760307  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:12:44.760324  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:12:44.760405  108421 reflector.go:153] Listing and watching *certificates.CertificateSigningRequest from storage/cacher.go:/certificatesigningrequests
I0919 12:12:44.761247  108421 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0919 12:12:44.761401  108421 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.761772  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:12:44.761793  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:12:44.761877  108421 reflector.go:153] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0919 12:12:44.763266  108421 watch_cache.go:405] Replace watchCache (rev: 59826) 
I0919 12:12:44.763921  108421 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0919 12:12:44.763982  108421 master.go:461] Enabling API group "coordination.k8s.io".
I0919 12:12:44.763999  108421 master.go:450] Skipping disabled API group "discovery.k8s.io".
I0919 12:12:44.764198  108421 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.764306  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:12:44.764323  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:12:44.764410  108421 reflector.go:153] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0919 12:12:44.764743  108421 watch_cache.go:405] Replace watchCache (rev: 59826) 
I0919 12:12:44.765209  108421 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0919 12:12:44.765244  108421 master.go:461] Enabling API group "extensions".
I0919 12:12:44.765249  108421 reflector.go:153] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0919 12:12:44.765465  108421 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.765632  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:12:44.765668  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:12:44.765776  108421 watch_cache.go:405] Replace watchCache (rev: 59826) 
I0919 12:12:44.765906  108421 watch_cache.go:405] Replace watchCache (rev: 59826) 
I0919 12:12:44.766562  108421 watch_cache.go:405] Replace watchCache (rev: 59826) 
I0919 12:12:44.766612  108421 store.go:1342] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I0919 12:12:44.766652  108421 reflector.go:153] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I0919 12:12:44.766776  108421 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.767186  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:12:44.767213  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:12:44.767652  108421 watch_cache.go:405] Replace watchCache (rev: 59826) 
I0919 12:12:44.768378  108421 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0919 12:12:44.768399  108421 master.go:461] Enabling API group "networking.k8s.io".
I0919 12:12:44.768452  108421 reflector.go:153] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0919 12:12:44.768449  108421 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.768585  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:12:44.768602  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:12:44.769205  108421 watch_cache.go:405] Replace watchCache (rev: 59826) 
I0919 12:12:44.769244  108421 store.go:1342] Monitoring runtimeclasses.node.k8s.io count at <storage-prefix>//runtimeclasses
I0919 12:12:44.769269  108421 master.go:461] Enabling API group "node.k8s.io".
I0919 12:12:44.769482  108421 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.769516  108421 reflector.go:153] Listing and watching *node.RuntimeClass from storage/cacher.go:/runtimeclasses
I0919 12:12:44.769579  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:12:44.769591  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:12:44.770222  108421 watch_cache.go:405] Replace watchCache (rev: 59826) 
I0919 12:12:44.770230  108421 store.go:1342] Monitoring poddisruptionbudgets.policy count at <storage-prefix>//poddisruptionbudgets
I0919 12:12:44.770281  108421 reflector.go:153] Listing and watching *policy.PodDisruptionBudget from storage/cacher.go:/poddisruptionbudgets
I0919 12:12:44.770502  108421 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.770639  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:12:44.770662  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:12:44.771046  108421 watch_cache.go:405] Replace watchCache (rev: 59826) 
I0919 12:12:44.771327  108421 store.go:1342] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicy
I0919 12:12:44.771343  108421 master.go:461] Enabling API group "policy".
I0919 12:12:44.771373  108421 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.771409  108421 reflector.go:153] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicy
I0919 12:12:44.771529  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:12:44.771546  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:12:44.772060  108421 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0919 12:12:44.772144  108421 watch_cache.go:405] Replace watchCache (rev: 59826) 
I0919 12:12:44.772220  108421 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.772346  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:12:44.772364  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:12:44.772450  108421 reflector.go:153] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0919 12:12:44.773298  108421 watch_cache.go:405] Replace watchCache (rev: 59826) 
I0919 12:12:44.773389  108421 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0919 12:12:44.773442  108421 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.773526  108421 reflector.go:153] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0919 12:12:44.773572  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:12:44.773590  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:12:44.774413  108421 watch_cache.go:405] Replace watchCache (rev: 59826) 
I0919 12:12:44.774710  108421 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0919 12:12:44.774799  108421 reflector.go:153] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0919 12:12:44.774890  108421 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.775020  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:12:44.775038  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:12:44.775698  108421 watch_cache.go:405] Replace watchCache (rev: 59826) 
I0919 12:12:44.775700  108421 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0919 12:12:44.775719  108421 reflector.go:153] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0919 12:12:44.775782  108421 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.775915  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:12:44.775933  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:12:44.776779  108421 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0919 12:12:44.776809  108421 reflector.go:153] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0919 12:12:44.776997  108421 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.777103  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:12:44.777123  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:12:44.777763  108421 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0919 12:12:44.777797  108421 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.777927  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:12:44.777924  108421 reflector.go:153] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0919 12:12:44.777946  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:12:44.778011  108421 watch_cache.go:405] Replace watchCache (rev: 59826) 
I0919 12:12:44.778097  108421 watch_cache.go:405] Replace watchCache (rev: 59826) 
I0919 12:12:44.778941  108421 watch_cache.go:405] Replace watchCache (rev: 59826) 
I0919 12:12:44.779605  108421 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0919 12:12:44.779669  108421 reflector.go:153] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0919 12:12:44.779807  108421 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.779956  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:12:44.779976  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:12:44.780918  108421 watch_cache.go:405] Replace watchCache (rev: 59826) 
I0919 12:12:44.781848  108421 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0919 12:12:44.781879  108421 master.go:461] Enabling API group "rbac.authorization.k8s.io".
I0919 12:12:44.781884  108421 reflector.go:153] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0919 12:12:44.782793  108421 watch_cache.go:405] Replace watchCache (rev: 59826) 
I0919 12:12:44.784161  108421 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.784259  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:12:44.784272  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:12:44.784899  108421 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0919 12:12:44.785005  108421 reflector.go:153] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0919 12:12:44.785136  108421 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.785239  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:12:44.785253  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:12:44.786008  108421 watch_cache.go:405] Replace watchCache (rev: 59826) 
I0919 12:12:44.786117  108421 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0919 12:12:44.786133  108421 master.go:461] Enabling API group "scheduling.k8s.io".
I0919 12:12:44.786173  108421 reflector.go:153] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0919 12:12:44.786279  108421 master.go:450] Skipping disabled API group "settings.k8s.io".
I0919 12:12:44.786486  108421 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.786619  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:12:44.786633  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:12:44.787236  108421 watch_cache.go:405] Replace watchCache (rev: 59826) 
I0919 12:12:44.787435  108421 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0919 12:12:44.787586  108421 reflector.go:153] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0919 12:12:44.787612  108421 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.787746  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:12:44.787770  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:12:44.788475  108421 watch_cache.go:405] Replace watchCache (rev: 59826) 
I0919 12:12:44.789176  108421 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0919 12:12:44.789210  108421 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.789231  108421 reflector.go:153] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0919 12:12:44.789301  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:12:44.789320  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:12:44.790070  108421 store.go:1342] Monitoring csinodes.storage.k8s.io count at <storage-prefix>//csinodes
I0919 12:12:44.790111  108421 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.790116  108421 reflector.go:153] Listing and watching *storage.CSINode from storage/cacher.go:/csinodes
I0919 12:12:44.790216  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:12:44.790221  108421 watch_cache.go:405] Replace watchCache (rev: 59826) 
I0919 12:12:44.790228  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:12:44.790791  108421 store.go:1342] Monitoring csidrivers.storage.k8s.io count at <storage-prefix>//csidrivers
I0919 12:12:44.790920  108421 reflector.go:153] Listing and watching *storage.CSIDriver from storage/cacher.go:/csidrivers
I0919 12:12:44.790974  108421 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.791071  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:12:44.791082  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:12:44.791576  108421 watch_cache.go:405] Replace watchCache (rev: 59826) 
I0919 12:12:44.791785  108421 watch_cache.go:405] Replace watchCache (rev: 59826) 
I0919 12:12:44.791810  108421 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0919 12:12:44.791930  108421 reflector.go:153] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0919 12:12:44.792136  108421 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.792270  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:12:44.792298  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:12:44.792959  108421 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0919 12:12:44.792989  108421 master.go:461] Enabling API group "storage.k8s.io".
I0919 12:12:44.793183  108421 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.793207  108421 watch_cache.go:405] Replace watchCache (rev: 59826) 
I0919 12:12:44.793365  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:12:44.793375  108421 reflector.go:153] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0919 12:12:44.793396  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:12:44.794012  108421 store.go:1342] Monitoring deployments.apps count at <storage-prefix>//deployments
I0919 12:12:44.794066  108421 reflector.go:153] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0919 12:12:44.794136  108421 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.794383  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:12:44.794399  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:12:44.794509  108421 watch_cache.go:405] Replace watchCache (rev: 59826) 
I0919 12:12:44.795352  108421 store.go:1342] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0919 12:12:44.795385  108421 reflector.go:153] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0919 12:12:44.795546  108421 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.795658  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:12:44.795688  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:12:44.796238  108421 watch_cache.go:405] Replace watchCache (rev: 59826) 
I0919 12:12:44.796259  108421 reflector.go:153] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0919 12:12:44.796243  108421 store.go:1342] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0919 12:12:44.796448  108421 watch_cache.go:405] Replace watchCache (rev: 59826) 
I0919 12:12:44.796913  108421 watch_cache.go:405] Replace watchCache (rev: 59826) 
I0919 12:12:44.797896  108421 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.798053  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:12:44.798077  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:12:44.798793  108421 store.go:1342] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0919 12:12:44.798941  108421 reflector.go:153] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0919 12:12:44.799012  108421 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.799166  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:12:44.799186  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:12:44.799702  108421 watch_cache.go:405] Replace watchCache (rev: 59826) 
I0919 12:12:44.800224  108421 store.go:1342] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0919 12:12:44.800257  108421 master.go:461] Enabling API group "apps".
I0919 12:12:44.800276  108421 reflector.go:153] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0919 12:12:44.800293  108421 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.800523  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:12:44.800557  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:12:44.801186  108421 watch_cache.go:405] Replace watchCache (rev: 59826) 
I0919 12:12:44.801339  108421 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0919 12:12:44.801378  108421 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.801451  108421 reflector.go:153] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0919 12:12:44.801539  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:12:44.801564  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:12:44.802349  108421 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0919 12:12:44.802407  108421 reflector.go:153] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0919 12:12:44.802395  108421 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.802579  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:12:44.802593  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:12:44.803228  108421 watch_cache.go:405] Replace watchCache (rev: 59826) 
I0919 12:12:44.803586  108421 watch_cache.go:405] Replace watchCache (rev: 59826) 
I0919 12:12:44.803604  108421 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0919 12:12:44.803639  108421 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.803680  108421 reflector.go:153] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0919 12:12:44.803871  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:12:44.803896  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:12:44.804492  108421 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0919 12:12:44.804552  108421 master.go:461] Enabling API group "admissionregistration.k8s.io".
I0919 12:12:44.804596  108421 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.804677  108421 reflector.go:153] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0919 12:12:44.804859  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:12:44.804888  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:12:44.805006  108421 watch_cache.go:405] Replace watchCache (rev: 59826) 
I0919 12:12:44.805483  108421 watch_cache.go:405] Replace watchCache (rev: 59826) 
I0919 12:12:44.805814  108421 store.go:1342] Monitoring events count at <storage-prefix>//events
I0919 12:12:44.805843  108421 master.go:461] Enabling API group "events.k8s.io".
I0919 12:12:44.806031  108421 reflector.go:153] Listing and watching *core.Event from storage/cacher.go:/events
I0919 12:12:44.806254  108421 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.806625  108421 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.806781  108421 watch_cache.go:405] Replace watchCache (rev: 59826) 
I0919 12:12:44.807064  108421 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.807236  108421 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.807550  108421 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.807714  108421 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.807952  108421 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.808135  108421 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.808298  108421 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.808470  108421 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.809365  108421 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.809853  108421 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.810740  108421 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.811022  108421 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.811785  108421 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.812126  108421 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.812988  108421 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.813335  108421 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.813988  108421 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.814299  108421 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0919 12:12:44.814363  108421 genericapiserver.go:404] Skipping API batch/v2alpha1 because it has no resources.
I0919 12:12:44.815035  108421 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.815252  108421 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.815561  108421 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.816250  108421 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.817247  108421 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.818283  108421 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.818633  108421 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.819440  108421 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.820296  108421 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.820864  108421 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.821476  108421 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0919 12:12:44.821547  108421 genericapiserver.go:404] Skipping API node.k8s.io/v1alpha1 because it has no resources.
I0919 12:12:44.822295  108421 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.822531  108421 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.823006  108421 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.823515  108421 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.824145  108421 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.824646  108421 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.825186  108421 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.825784  108421 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.826236  108421 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.826904  108421 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.827631  108421 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0919 12:12:44.827700  108421 genericapiserver.go:404] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
I0919 12:12:44.828176  108421 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.828732  108421 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0919 12:12:44.828835  108421 genericapiserver.go:404] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
I0919 12:12:44.829296  108421 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.829823  108421 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.830037  108421 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.830471  108421 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.830680  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:44.830703  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:44.830934  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:44.830964  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:44.831053  108421 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.831668  108421 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.832119  108421 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0919 12:12:44.832170  108421 genericapiserver.go:404] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
I0919 12:12:44.832835  108421 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.833365  108421 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.833570  108421 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.833636  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:44.834253  108421 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.834488  108421 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.834679  108421 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.835381  108421 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.835668  108421 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.835997  108421 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.836697  108421 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.837045  108421 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.837316  108421 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0919 12:12:44.837381  108421 genericapiserver.go:404] Skipping API apps/v1beta2 because it has no resources.
W0919 12:12:44.837389  108421 genericapiserver.go:404] Skipping API apps/v1beta1 because it has no resources.
I0919 12:12:44.838266  108421 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.839002  108421 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.839695  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:44.839739  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:44.839876  108421 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.840731  108421 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.841645  108421 storage_factory.go:285] storing events.events.k8s.io in events.k8s.io/v1beta1, reading as events.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3402e26a-010b-4204-9d3e-bc51f9b0f91b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 12:12:44.844789  108421 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 12:12:44.844824  108421 healthz.go:177] healthz check poststarthook/bootstrap-controller failed: not finished
I0919 12:12:44.844834  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:12:44.844843  108421 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 12:12:44.844851  108421 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 12:12:44.844860  108421 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[-]poststarthook/bootstrap-controller failed: reason withheld
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 12:12:44.844900  108421 httplog.go:90] GET /healthz: (248.11µs) 0 [Go-http-client/1.1 127.0.0.1:42244]
I0919 12:12:44.846167  108421 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.451241ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42246]
I0919 12:12:44.848658  108421 httplog.go:90] GET /api/v1/services: (1.017278ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42246]
I0919 12:12:44.852799  108421 httplog.go:90] GET /api/v1/services: (1.128452ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42246]
I0919 12:12:44.855051  108421 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 12:12:44.855081  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:12:44.855093  108421 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 12:12:44.855102  108421 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 12:12:44.855110  108421 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 12:12:44.855136  108421 httplog.go:90] GET /healthz: (166.452µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42246]
I0919 12:12:44.856223  108421 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.119958ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42244]
I0919 12:12:44.857394  108421 httplog.go:90] GET /api/v1/services: (1.161125ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42250]
I0919 12:12:44.857483  108421 httplog.go:90] GET /api/v1/services: (1.368027ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42246]
I0919 12:12:44.858750  108421 httplog.go:90] POST /api/v1/namespaces: (2.099649ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42244]
I0919 12:12:44.859939  108421 httplog.go:90] GET /api/v1/namespaces/kube-public: (829.821µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42246]
I0919 12:12:44.861694  108421 httplog.go:90] POST /api/v1/namespaces: (1.326488ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42246]
I0919 12:12:44.863054  108421 httplog.go:90] GET /api/v1/namespaces/kube-node-lease: (868.589µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42246]
I0919 12:12:44.864955  108421 httplog.go:90] POST /api/v1/namespaces: (1.487569ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42246]
I0919 12:12:44.945676  108421 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 12:12:44.945713  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:12:44.945725  108421 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 12:12:44.945733  108421 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 12:12:44.945742  108421 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 12:12:44.945782  108421 httplog.go:90] GET /healthz: (224.771µs) 0 [Go-http-client/1.1 127.0.0.1:42246]
I0919 12:12:44.956600  108421 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 12:12:44.956634  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:12:44.956644  108421 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 12:12:44.956650  108421 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 12:12:44.956656  108421 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 12:12:44.956707  108421 httplog.go:90] GET /healthz: (237.325µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42246]
I0919 12:12:45.045747  108421 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 12:12:45.045787  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:12:45.045796  108421 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 12:12:45.045803  108421 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 12:12:45.045809  108421 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 12:12:45.045844  108421 httplog.go:90] GET /healthz: (232.982µs) 0 [Go-http-client/1.1 127.0.0.1:42246]
I0919 12:12:45.056520  108421 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 12:12:45.056568  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:12:45.056581  108421 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 12:12:45.056590  108421 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 12:12:45.056598  108421 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 12:12:45.056647  108421 httplog.go:90] GET /healthz: (333.397µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42246]
I0919 12:12:45.188063  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:45.191848  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:45.192066  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:45.191955  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:45.192029  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:45.192208  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:45.265749  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:45.265760  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:45.267329  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:45.267488  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:45.267636  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:45.267827  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:45.391706  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:45.470631  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:45.705571  108421 client.go:361] parsed scheme: "endpoint"
I0919 12:12:45.705665  108421 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 12:12:45.746826  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:12:45.746996  108421 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 12:12:45.747050  108421 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 12:12:45.747084  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 12:12:45.747285  108421 httplog.go:90] GET /healthz: (1.701148ms) 0 [Go-http-client/1.1 127.0.0.1:42246]
I0919 12:12:45.757274  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:12:45.757305  108421 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 12:12:45.757312  108421 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 12:12:45.757319  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 12:12:45.757363  108421 httplog.go:90] GET /healthz: (1.062422ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42246]
I0919 12:12:45.830841  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:45.830932  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:45.831124  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:45.831150  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:45.833791  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:45.839860  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:45.839897  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:45.846344  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:12:45.846521  108421 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 12:12:45.846608  108421 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.67287ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42246]
I0919 12:12:45.846399  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.137598ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42256]
I0919 12:12:45.846621  108421 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 12:12:45.846793  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 12:12:45.846839  108421 httplog.go:90] GET /healthz: (1.105897ms) 0 [Go-http-client/1.1 127.0.0.1:42258]
I0919 12:12:45.846395  108421 httplog.go:90] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-node-critical: (1.474848ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42250]
I0919 12:12:45.847875  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (783.925µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:45.848813  108421 httplog.go:90] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (1.499718ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42246]
I0919 12:12:45.849107  108421 storage_scheduling.go:139] created PriorityClass system-node-critical with value 2000001000
I0919 12:12:45.849734  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (891.999µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:45.849790  108421 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: (2.364435ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42256]
I0919 12:12:45.850372  108421 httplog.go:90] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-cluster-critical: (747.763µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42246]
I0919 12:12:45.851142  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (880.119µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42256]
I0919 12:12:45.851585  108421 httplog.go:90] POST /api/v1/namespaces/kube-system/configmaps: (1.405681ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:45.851941  108421 httplog.go:90] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (1.187009ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42246]
I0919 12:12:45.852137  108421 storage_scheduling.go:139] created PriorityClass system-cluster-critical with value 2000000000
I0919 12:12:45.852164  108421 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
I0919 12:12:45.852532  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (948.467µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42256]
I0919 12:12:45.853696  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (740.212µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42246]
I0919 12:12:45.854754  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (638.33µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42246]
I0919 12:12:45.855957  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (953.982µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42246]
I0919 12:12:45.856966  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:12:45.856992  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 12:12:45.857026  108421 httplog.go:90] GET /healthz: (766.844µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42246]
I0919 12:12:45.857238  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (937.123µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:45.858224  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-admin: (694.215µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:45.860045  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.421114ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:45.860302  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/cluster-admin
I0919 12:12:45.861204  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:discovery: (706.287µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:45.862832  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.258506ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:45.862990  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:discovery
I0919 12:12:45.863861  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:basic-user: (689.377µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:45.865834  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.572369ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:45.866089  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:basic-user
I0919 12:12:45.867318  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:public-info-viewer: (843.609µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:45.868880  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.192745ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:45.869048  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:public-info-viewer
I0919 12:12:45.869880  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (725.193µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:45.871395  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.22603ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:45.871586  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/admin
I0919 12:12:45.872465  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (713.663µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:45.874168  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.247109ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:45.874468  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/edit
I0919 12:12:45.875380  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (709.775µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:45.876998  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.155072ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:45.877188  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/view
I0919 12:12:45.878092  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (720.095µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:45.879656  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.178939ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:45.879858  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin
I0919 12:12:45.880716  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (661.924µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:45.882526  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.414086ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:45.882816  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit
I0919 12:12:45.883715  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (704.777µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:45.885652  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.459765ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:45.885929  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view
I0919 12:12:45.886844  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:heapster: (743.332µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:45.888467  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.209631ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:45.888675  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:heapster
I0919 12:12:45.889593  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node: (759.297µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:45.891321  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.386917ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:45.891674  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node
I0919 12:12:45.892639  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-problem-detector: (724.683µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:45.894316  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.268374ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:45.894543  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector
I0919 12:12:45.895382  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kubelet-api-admin: (685.197µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:45.897138  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.369856ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:45.897391  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kubelet-api-admin
I0919 12:12:45.898436  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-bootstrapper: (811.552µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:45.900118  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.215799ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:45.900304  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-bootstrapper
I0919 12:12:45.901258  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:auth-delegator: (768.547µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:45.902888  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.254702ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:45.903117  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:auth-delegator
I0919 12:12:45.904048  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-aggregator: (701.371µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:45.905602  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.224794ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:45.905817  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator
I0919 12:12:45.906836  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-controller-manager: (821.126µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:45.908649  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.396032ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:45.908906  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager
I0919 12:12:45.909810  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-dns: (733.224µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:45.911444  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.250215ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:45.911633  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-dns
I0919 12:12:45.912665  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:persistent-volume-provisioner: (840.231µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:45.914314  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.280898ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:45.914578  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner
I0919 12:12:45.915837  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-attacher: (1.057582ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:45.917558  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.394537ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:45.917768  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:csi-external-attacher
I0919 12:12:45.918667  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:nodeclient: (700.903µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:45.920550  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.506409ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:45.920865  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:nodeclient
I0919 12:12:45.921871  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient: (827.307µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:45.923775  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.471308ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:45.924075  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
I0919 12:12:45.925022  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:volume-scheduler: (716.723µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:45.926576  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.194221ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:45.926806  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:volume-scheduler
I0919 12:12:45.927813  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-proxier: (825.789µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:45.929532  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.243662ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:45.929826  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-proxier
I0919 12:12:45.930840  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-scheduler: (774.552µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:45.932794  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.405686ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:45.933143  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler
I0919 12:12:45.934324  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-provisioner: (908.894µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:45.936234  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.391326ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:45.936491  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:csi-external-provisioner
I0919 12:12:45.937520  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:attachdetach-controller: (811.185µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:45.939247  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.2229ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:45.939555  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0919 12:12:45.940566  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:clusterrole-aggregation-controller: (745.133µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:45.942611  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.615921ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:45.942860  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0919 12:12:45.943892  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:cronjob-controller: (811.605µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:45.945666  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.372468ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:45.946001  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0919 12:12:45.946008  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:12:45.946037  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 12:12:45.946133  108421 httplog.go:90] GET /healthz: (803.128µs) 0 [Go-http-client/1.1 127.0.0.1:42246]
I0919 12:12:45.947067  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:daemon-set-controller: (724.08µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:45.948925  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.468589ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:45.949158  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0919 12:12:45.950188  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:deployment-controller: (836.289µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:45.951944  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.368305ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:45.952189  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller
I0919 12:12:45.953251  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:disruption-controller: (757.446µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:45.955035  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.348784ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:45.955262  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller
I0919 12:12:45.956223  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:endpoint-controller: (760.999µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:45.956841  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:12:45.956863  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 12:12:45.956890  108421 httplog.go:90] GET /healthz: (758.353µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42246]
I0919 12:12:45.958033  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.352632ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:45.958294  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0919 12:12:45.959212  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:expand-controller: (678.201µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:45.961139  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.455396ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:45.961495  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:expand-controller
I0919 12:12:45.962567  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:generic-garbage-collector: (748.408µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:45.964698  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.625534ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:45.964977  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0919 12:12:45.966004  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:horizontal-pod-autoscaler: (719.626µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:45.967724  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.305707ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:45.967966  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0919 12:12:45.968990  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:job-controller: (825.819µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:45.970729  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.321665ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:45.971608  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller
I0919 12:12:45.972553  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:namespace-controller: (695.352µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:45.974210  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.288845ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:45.974398  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:namespace-controller
I0919 12:12:45.975268  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:node-controller: (693.293µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:45.977085  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.354471ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:45.977338  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:node-controller
I0919 12:12:45.978299  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:persistent-volume-binder: (705.3µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:45.980491  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.738542ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:45.980785  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0919 12:12:45.981743  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pod-garbage-collector: (739.953µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:45.983292  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.185819ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:45.983479  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0919 12:12:45.984367  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replicaset-controller: (748.287µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:45.986003  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.282299ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:45.986277  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0919 12:12:45.987342  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replication-controller: (780.757µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:45.989304  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.484034ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:45.989620  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller
I0919 12:12:45.990683  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:resourcequota-controller: (695.466µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:45.992593  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.437965ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:45.992840  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0919 12:12:45.993723  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:route-controller: (675.151µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:45.995578  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.392932ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:45.995858  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:route-controller
I0919 12:12:45.996886  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-account-controller: (756.254µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:45.998701  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.350252ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:45.998939  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-account-controller
I0919 12:12:45.999946  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-controller: (823.465µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:46.001782  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.44401ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:46.002008  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-controller
I0919 12:12:46.002978  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:statefulset-controller: (801.005µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:46.004781  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.296249ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:46.005051  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0919 12:12:46.006050  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:ttl-controller: (800.953µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:46.027188  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.02567ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:46.027475  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-controller
I0919 12:12:46.046429  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:12:46.046462  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 12:12:46.046578  108421 httplog.go:90] GET /healthz: (1.178439ms) 0 [Go-http-client/1.1 127.0.0.1:42246]
I0919 12:12:46.047400  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:certificate-controller: (2.22152ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:46.057072  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:12:46.057105  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 12:12:46.057157  108421 httplog.go:90] GET /healthz: (984.172µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:46.067127  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.973798ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:46.067452  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller
I0919 12:12:46.086466  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pvc-protection-controller: (1.331294ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:46.107178  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.033827ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:46.107555  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0919 12:12:46.126616  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pv-protection-controller: (1.463347ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:46.146916  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.770334ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:46.147040  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:12:46.147058  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 12:12:46.147088  108421 httplog.go:90] GET /healthz: (1.616531ms) 0 [Go-http-client/1.1 127.0.0.1:42246]
I0919 12:12:46.147262  108421 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0919 12:12:46.157383  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:12:46.157412  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 12:12:46.157496  108421 httplog.go:90] GET /healthz: (1.292545ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42246]
I0919 12:12:46.166190  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin: (1.083164ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42246]
I0919 12:12:46.187483  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.272934ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42246]
I0919 12:12:46.187811  108421 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin
I0919 12:12:46.188273  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:46.192226  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:46.192254  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:46.192274  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:46.192259  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:46.192434  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:46.206633  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (1.473449ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42246]
I0919 12:12:46.227591  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.421354ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42246]
I0919 12:12:46.227912  108421 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery
I0919 12:12:46.246740  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:12:46.246791  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 12:12:46.246837  108421 httplog.go:90] GET /healthz: (1.415971ms) 0 [Go-http-client/1.1 127.0.0.1:42258]
I0919 12:12:46.246752  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:basic-user: (1.544276ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42246]
I0919 12:12:46.257354  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:12:46.257391  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 12:12:46.257444  108421 httplog.go:90] GET /healthz: (1.155925ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:46.265918  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:46.265940  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:46.267131  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.031301ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:46.267462  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:46.267497  108421 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user
I0919 12:12:46.267767  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:46.267911  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:46.268057  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:46.286565  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:public-info-viewer: (1.294542ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:46.307465  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.192167ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:46.307745  108421 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:public-info-viewer
I0919 12:12:46.326543  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node-proxier: (1.325569ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:46.346561  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:12:46.346609  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 12:12:46.346649  108421 httplog.go:90] GET /healthz: (1.14958ms) 0 [Go-http-client/1.1 127.0.0.1:42246]
I0919 12:12:46.347143  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.94034ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:46.347357  108421 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier
I0919 12:12:46.356945  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:12:46.357110  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 12:12:46.357267  108421 httplog.go:90] GET /healthz: (1.092721ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:46.366102  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-controller-manager: (1.049495ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:46.387036  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.964021ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:46.387324  108421 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager
I0919 12:12:46.391922  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:46.406408  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-dns: (1.255174ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:46.427614  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.403159ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:46.428004  108421 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns
I0919 12:12:46.446474  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:12:46.446678  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 12:12:46.446788  108421 httplog.go:90] GET /healthz: (1.375772ms) 0 [Go-http-client/1.1 127.0.0.1:42246]
I0919 12:12:46.446697  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-scheduler: (1.628122ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:46.457338  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:12:46.457371  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 12:12:46.457458  108421 httplog.go:90] GET /healthz: (1.128182ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:46.467115  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.000324ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:46.467369  108421 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler
I0919 12:12:46.470874  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:46.486446  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:volume-scheduler: (1.25156ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:46.507312  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.154004ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:46.507648  108421 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:volume-scheduler
I0919 12:12:46.526673  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node: (1.520921ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:46.546373  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:12:46.546415  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 12:12:46.546483  108421 httplog.go:90] GET /healthz: (1.119863ms) 0 [Go-http-client/1.1 127.0.0.1:42246]
I0919 12:12:46.547168  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.957094ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:46.547384  108421 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node
I0919 12:12:46.557260  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:12:46.557447  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 12:12:46.557629  108421 httplog.go:90] GET /healthz: (1.383787ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:46.566290  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:attachdetach-controller: (1.142084ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:46.587042  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.981812ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:46.587282  108421 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0919 12:12:46.606337  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:clusterrole-aggregation-controller: (1.233953ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:46.626959  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.890717ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:46.627238  108421 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0919 12:12:46.646889  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:12:46.646926  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:cronjob-controller: (1.746134ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:46.646948  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 12:12:46.646994  108421 httplog.go:90] GET /healthz: (1.214037ms) 0 [Go-http-client/1.1 127.0.0.1:42246]
I0919 12:12:46.657478  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:12:46.657519  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 12:12:46.657559  108421 httplog.go:90] GET /healthz: (1.33662ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42246]
I0919 12:12:46.667240  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.113477ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42246]
I0919 12:12:46.667594  108421 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0919 12:12:46.686628  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:daemon-set-controller: (1.421831ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42246]
I0919 12:12:46.707110  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.95904ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42246]
I0919 12:12:46.707356  108421 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0919 12:12:46.726572  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:deployment-controller: (1.371616ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42246]
I0919 12:12:46.746819  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:12:46.746983  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 12:12:46.747118  108421 httplog.go:90] GET /healthz: (1.576542ms) 0 [Go-http-client/1.1 127.0.0.1:42258]
I0919 12:12:46.747740  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.521944ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42246]
I0919 12:12:46.748086  108421 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I0919 12:12:46.757216  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:12:46.757388  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 12:12:46.757555  108421 httplog.go:90] GET /healthz: (1.369981ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42246]
I0919 12:12:46.766411  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:disruption-controller: (1.285709ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42246]
I0919 12:12:46.787095  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.974481ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42246]
I0919 12:12:46.787360  108421 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I0919 12:12:46.806686  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:endpoint-controller: (1.54552ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42246]
I0919 12:12:46.827269  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.125798ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42246]
I0919 12:12:46.827578  108421 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0919 12:12:46.831032  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:46.831150  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:46.831276  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:46.831283  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:46.834032  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:46.840033  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:46.840033  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:46.846454  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:12:46.846492  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 12:12:46.846529  108421 httplog.go:90] GET /healthz: (1.17042ms) 0 [Go-http-client/1.1 127.0.0.1:42258]
I0919 12:12:46.846806  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:expand-controller: (1.610554ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42246]
I0919 12:12:46.857234  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:12:46.857266  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 12:12:46.857305  108421 httplog.go:90] GET /healthz: (1.02976ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42246]
I0919 12:12:46.867081  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.92512ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42246]
I0919 12:12:46.867349  108421 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
I0919 12:12:46.886858  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:generic-garbage-collector: (1.700031ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42246]
I0919 12:12:46.907118  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.985081ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42246]
I0919 12:12:46.907366  108421 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0919 12:12:46.926548  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:horizontal-pod-autoscaler: (1.361729ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42246]
I0919 12:12:46.946467  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:12:46.946516  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 12:12:46.946559  108421 httplog.go:90] GET /healthz: (1.177386ms) 0 [Go-http-client/1.1 127.0.0.1:42258]
I0919 12:12:46.947502  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.286777ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42246]
I0919 12:12:46.947849  108421 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0919 12:12:46.957325  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:12:46.957615  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 12:12:46.957783  108421 httplog.go:90] GET /healthz: (1.490754ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42246]
I0919 12:12:46.966563  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:job-controller: (1.45283ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42246]
I0919 12:12:46.987362  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.201363ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42246]
I0919 12:12:46.987808  108421 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I0919 12:12:47.006576  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:namespace-controller: (1.441711ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42246]
I0919 12:12:47.027560  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.389266ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42246]
I0919 12:12:47.027941  108421 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I0919 12:12:47.046639  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:12:47.046680  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 12:12:47.046719  108421 httplog.go:90] GET /healthz: (1.311796ms) 0 [Go-http-client/1.1 127.0.0.1:42258]
I0919 12:12:47.046767  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:node-controller: (1.649494ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42246]
I0919 12:12:47.057276  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:12:47.057309  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 12:12:47.057350  108421 httplog.go:90] GET /healthz: (1.090295ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42246]
I0919 12:12:47.067106  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.947809ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42246]
I0919 12:12:47.067383  108421 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I0919 12:12:47.086611  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:persistent-volume-binder: (1.447546ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42246]
I0919 12:12:47.107201  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.04506ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42246]
I0919 12:12:47.107483  108421 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0919 12:12:47.126636  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pod-garbage-collector: (1.560241ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42246]
I0919 12:12:47.146658  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:12:47.146700  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 12:12:47.146768  108421 httplog.go:90] GET /healthz: (1.390977ms) 0 [Go-http-client/1.1 127.0.0.1:42258]
I0919 12:12:47.147276  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.012549ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42246]
I0919 12:12:47.147543  108421 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0919 12:12:47.157243  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:12:47.157283  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 12:12:47.157317  108421 httplog.go:90] GET /healthz: (1.066326ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42246]
I0919 12:12:47.166507  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replicaset-controller: (1.406694ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42246]
I0919 12:12:47.186851  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.777214ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42246]
I0919 12:12:47.187097  108421 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0919 12:12:47.188516  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:47.192402  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:47.192448  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:47.192631  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:47.192450  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:47.192480  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:47.206449  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replication-controller: (1.318401ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42246]
I0919 12:12:47.227062  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.900784ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42246]
I0919 12:12:47.227377  108421 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I0919 12:12:47.246632  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:resourcequota-controller: (1.413701ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42246]
I0919 12:12:47.246718  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:12:47.246905  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 12:12:47.246977  108421 httplog.go:90] GET /healthz: (1.484348ms) 0 [Go-http-client/1.1 127.0.0.1:42258]
I0919 12:12:47.257273  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:12:47.257405  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 12:12:47.257458  108421 httplog.go:90] GET /healthz: (1.203341ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:47.266206  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:47.266217  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:47.267235  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.08921ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:47.267599  108421 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0919 12:12:47.267640  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:47.267932  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:47.268066  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:47.268219  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:47.286544  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:route-controller: (1.355255ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:47.307847  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.640238ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:47.308191  108421 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I0919 12:12:47.326724  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-account-controller: (1.548835ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:47.346383  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:12:47.346670  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 12:12:47.346912  108421 httplog.go:90] GET /healthz: (1.515101ms) 0 [Go-http-client/1.1 127.0.0.1:42246]
I0919 12:12:47.347030  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.865413ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:47.347325  108421 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I0919 12:12:47.357254  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:12:47.357412  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 12:12:47.357656  108421 httplog.go:90] GET /healthz: (1.390348ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:47.366604  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-controller: (1.347239ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:47.387006  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.934727ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:47.387282  108421 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I0919 12:12:47.392167  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:47.406889  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:statefulset-controller: (1.613309ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:47.427311  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.100203ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:47.427712  108421 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0919 12:12:47.446403  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:12:47.446465  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 12:12:47.446518  108421 httplog.go:90] GET /healthz: (1.13997ms) 0 [Go-http-client/1.1 127.0.0.1:42246]
I0919 12:12:47.446563  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:ttl-controller: (1.412158ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:47.457489  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:12:47.457525  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 12:12:47.457574  108421 httplog.go:90] GET /healthz: (1.227154ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:47.471048  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:47.473322  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (8.211299ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:47.473798  108421 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I0919 12:12:47.486391  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:certificate-controller: (1.335593ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:47.507214  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.063425ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:47.507539  108421 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I0919 12:12:47.526452  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pvc-protection-controller: (1.314173ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:47.546711  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:12:47.546899  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 12:12:47.547049  108421 httplog.go:90] GET /healthz: (1.560776ms) 0 [Go-http-client/1.1 127.0.0.1:42246]
I0919 12:12:47.547637  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.481439ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:47.548115  108421 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0919 12:12:47.557563  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:12:47.557626  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 12:12:47.557674  108421 httplog.go:90] GET /healthz: (1.343428ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:47.566466  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pv-protection-controller: (1.368768ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:47.587034  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.929156ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:47.587263  108421 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0919 12:12:47.606562  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/extension-apiserver-authentication-reader: (1.366125ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:47.608606  108421 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.364893ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:47.627453  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.267327ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:47.627741  108421 storage_rbac.go:278] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
I0919 12:12:47.646397  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:12:47.646707  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 12:12:47.646663  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:bootstrap-signer: (1.371801ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:47.646909  108421 httplog.go:90] GET /healthz: (1.497393ms) 0 [Go-http-client/1.1 127.0.0.1:42246]
I0919 12:12:47.649071  108421 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.676711ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42246]
I0919 12:12:47.657061  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:12:47.657095  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 12:12:47.657140  108421 httplog.go:90] GET /healthz: (973.825µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42246]
I0919 12:12:47.667310  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.211453ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42246]
I0919 12:12:47.667647  108421 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0919 12:12:47.686585  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:cloud-provider: (1.368694ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42246]
I0919 12:12:47.688810  108421 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.577777ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42246]
I0919 12:12:47.707458  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.286712ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42246]
I0919 12:12:47.707783  108421 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0919 12:12:47.726565  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:token-cleaner: (1.414316ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42246]
I0919 12:12:47.728529  108421 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.315113ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42246]
I0919 12:12:47.746871  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:12:47.746904  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 12:12:47.746936  108421 httplog.go:90] GET /healthz: (1.124176ms) 0 [Go-http-client/1.1 127.0.0.1:42258]
I0919 12:12:47.747263  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.119396ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42246]
I0919 12:12:47.747583  108421 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0919 12:12:47.757114  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:12:47.757159  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 12:12:47.757207  108421 httplog.go:90] GET /healthz: (1.004891ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42246]
I0919 12:12:47.766656  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-controller-manager: (1.421865ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42246]
I0919 12:12:47.768539  108421 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.35596ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42246]
I0919 12:12:47.787187  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.034063ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42246]
I0919 12:12:47.787559  108421 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0919 12:12:47.806453  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-scheduler: (1.293056ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42246]
I0919 12:12:47.808621  108421 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.369204ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42246]
I0919 12:12:47.827536  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.395419ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42246]
I0919 12:12:47.827785  108421 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0919 12:12:47.831231  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:47.831341  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:47.831455  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:47.831528  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:47.834186  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:47.840207  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:47.840213  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:47.846470  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:12:47.846659  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles/system:controller:bootstrap-signer: (1.510228ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42246]
I0919 12:12:47.846747  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 12:12:47.846878  108421 httplog.go:90] GET /healthz: (1.351885ms) 0 [Go-http-client/1.1 127.0.0.1:42258]
I0919 12:12:47.848444  108421 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.070352ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:47.857116  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:12:47.857149  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 12:12:47.857186  108421 httplog.go:90] GET /healthz: (983.964µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:47.867049  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles: (1.985477ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:47.867323  108421 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0919 12:12:47.886829  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::extension-apiserver-authentication-reader: (1.635675ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:47.889192  108421 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.686204ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:47.907488  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.292033ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:47.907797  108421 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::extension-apiserver-authentication-reader in kube-system
I0919 12:12:47.926538  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-controller-manager: (1.325184ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:47.928328  108421 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.287603ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:47.946382  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:12:47.946545  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 12:12:47.946676  108421 httplog.go:90] GET /healthz: (1.257274ms) 0 [Go-http-client/1.1 127.0.0.1:42246]
I0919 12:12:47.947148  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.0081ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:47.947480  108421 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0919 12:12:47.957489  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:12:47.957520  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 12:12:47.957562  108421 httplog.go:90] GET /healthz: (1.34377ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:47.966462  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-scheduler: (1.394156ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:47.968801  108421 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.706687ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:47.987201  108421 httplog.go:90] GET /api/v1/namespaces/default: (1.571077ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42410]
I0919 12:12:47.988394  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (3.095862ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:47.988778  108421 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0919 12:12:47.989816  108421 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.273962ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42410]
I0919 12:12:47.992047  108421 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.477049ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42410]
I0919 12:12:48.006506  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:bootstrap-signer: (1.354128ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:48.008730  108421 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.510805ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:48.027028  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.855302ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:48.027448  108421 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0919 12:12:48.046166  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:12:48.046380  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 12:12:48.046287  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:cloud-provider: (1.154626ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:48.046521  108421 httplog.go:90] GET /healthz: (1.139536ms) 0 [Go-http-client/1.1 127.0.0.1:42246]
I0919 12:12:48.048254  108421 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.161373ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42246]
I0919 12:12:48.057355  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:12:48.057391  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 12:12:48.057470  108421 httplog.go:90] GET /healthz: (1.239962ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42246]
I0919 12:12:48.067050  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.903572ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42246]
I0919 12:12:48.067443  108421 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0919 12:12:48.086551  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:token-cleaner: (1.402513ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42246]
I0919 12:12:48.088496  108421 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.268034ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42246]
I0919 12:12:48.107303  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.145137ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42246]
I0919 12:12:48.107652  108421 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0919 12:12:48.126269  108421 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings/system:controller:bootstrap-signer: (1.201878ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42246]
I0919 12:12:48.128165  108421 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.270423ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42246]
I0919 12:12:48.146514  108421 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 12:12:48.146544  108421 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 12:12:48.146588  108421 httplog.go:90] GET /healthz: (1.228936ms) 0 [Go-http-client/1.1 127.0.0.1:42258]
I0919 12:12:48.148183  108421 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings: (2.902622ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42246]
I0919 12:12:48.148444  108421 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0919 12:12:48.158284  108421 httplog.go:90] GET /healthz: (1.75073ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42246]
I0919 12:12:48.159853  108421 httplog.go:90] GET /api/v1/namespaces/default: (1.104012ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42246]
I0919 12:12:48.162055  108421 httplog.go:90] POST /api/v1/namespaces: (1.683291ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42246]
I0919 12:12:48.163579  108421 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.039287ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42246]
I0919 12:12:48.167513  108421 httplog.go:90] POST /api/v1/namespaces/default/services: (3.400749ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42246]
I0919 12:12:48.168847  108421 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (910.518µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42246]
I0919 12:12:48.170889  108421 httplog.go:90] POST /api/v1/namespaces/default/endpoints: (1.620636ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42246]
I0919 12:12:48.188714  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:48.192698  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:48.192698  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:48.192995  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:48.193002  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:48.193025  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:48.246625  108421 httplog.go:90] GET /healthz: (1.127899ms) 200 [Go-http-client/1.1 127.0.0.1:42246]
W0919 12:12:48.247855  108421 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 12:12:48.247910  108421 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 12:12:48.247941  108421 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 12:12:48.247949  108421 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 12:12:48.247981  108421 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 12:12:48.247994  108421 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 12:12:48.248008  108421 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 12:12:48.248017  108421 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 12:12:48.248032  108421 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 12:12:48.248045  108421 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 12:12:48.248052  108421 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 12:12:48.248098  108421 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0919 12:12:48.248287  108421 factory.go:294] Creating scheduler from algorithm provider 'DefaultProvider'
I0919 12:12:48.248373  108421 factory.go:382] Creating scheduler with fit predicates 'map[CheckNodeUnschedulable:{} CheckVolumeBinding:{} GeneralPredicates:{} MatchInterPodAffinity:{} MaxAzureDiskVolumeCount:{} MaxCSIVolumeCountPred:{} MaxEBSVolumeCount:{} MaxGCEPDVolumeCount:{} NoDiskConflict:{} NoVolumeZoneConflict:{} PodToleratesNodeTaints:{}]' and priority functions 'map[BalancedResourceAllocation:{} ImageLocalityPriority:{} InterPodAffinityPriority:{} LeastRequestedPriority:{} NodeAffinityPriority:{} NodePreferAvoidPodsPriority:{} SelectorSpreadPriority:{} TaintTolerationPriority:{}]'
I0919 12:12:48.248659  108421 shared_informer.go:197] Waiting for caches to sync for scheduler
I0919 12:12:48.248959  108421 reflector.go:118] Starting reflector *v1.Pod (12h0m0s) from k8s.io/kubernetes/test/integration/scheduler/util.go:232
I0919 12:12:48.248987  108421 reflector.go:153] Listing and watching *v1.Pod from k8s.io/kubernetes/test/integration/scheduler/util.go:232
I0919 12:12:48.250044  108421 httplog.go:90] GET /api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0: (686.113µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42246]
I0919 12:12:48.251393  108421 get.go:251] Starting watch for /api/v1/pods, rv=59826 labels= fields=status.phase!=Failed,status.phase!=Succeeded timeout=7m40s
I0919 12:12:48.266396  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:48.266396  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:48.267813  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:48.268074  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:48.268229  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:48.268335  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:48.348889  108421 shared_informer.go:227] caches populated
I0919 12:12:48.348925  108421 shared_informer.go:204] Caches are synced for scheduler 
I0919 12:12:48.349201  108421 reflector.go:118] Starting reflector *v1.PersistentVolume (1s) from k8s.io/client-go/informers/factory.go:134
I0919 12:12:48.349232  108421 reflector.go:153] Listing and watching *v1.PersistentVolume from k8s.io/client-go/informers/factory.go:134
I0919 12:12:48.349221  108421 reflector.go:118] Starting reflector *v1beta1.CSINode (1s) from k8s.io/client-go/informers/factory.go:134
I0919 12:12:48.349252  108421 reflector.go:153] Listing and watching *v1beta1.CSINode from k8s.io/client-go/informers/factory.go:134
I0919 12:12:48.349298  108421 reflector.go:118] Starting reflector *v1.StatefulSet (1s) from k8s.io/client-go/informers/factory.go:134
I0919 12:12:48.349322  108421 reflector.go:153] Listing and watching *v1.StatefulSet from k8s.io/client-go/informers/factory.go:134
I0919 12:12:48.349349  108421 reflector.go:118] Starting reflector *v1.StorageClass (1s) from k8s.io/client-go/informers/factory.go:134
I0919 12:12:48.349370  108421 reflector.go:153] Listing and watching *v1.StorageClass from k8s.io/client-go/informers/factory.go:134
I0919 12:12:48.349551  108421 reflector.go:118] Starting reflector *v1.ReplicationController (1s) from k8s.io/client-go/informers/factory.go:134
I0919 12:12:48.349566  108421 reflector.go:153] Listing and watching *v1.ReplicationController from k8s.io/client-go/informers/factory.go:134
I0919 12:12:48.349641  108421 reflector.go:118] Starting reflector *v1beta1.PodDisruptionBudget (1s) from k8s.io/client-go/informers/factory.go:134
I0919 12:12:48.349669  108421 reflector.go:153] Listing and watching *v1beta1.PodDisruptionBudget from k8s.io/client-go/informers/factory.go:134
I0919 12:12:48.349720  108421 reflector.go:118] Starting reflector *v1.PersistentVolumeClaim (1s) from k8s.io/client-go/informers/factory.go:134
I0919 12:12:48.349742  108421 reflector.go:153] Listing and watching *v1.PersistentVolumeClaim from k8s.io/client-go/informers/factory.go:134
I0919 12:12:48.350210  108421 reflector.go:118] Starting reflector *v1.Service (1s) from k8s.io/client-go/informers/factory.go:134
I0919 12:12:48.350237  108421 reflector.go:153] Listing and watching *v1.Service from k8s.io/client-go/informers/factory.go:134
I0919 12:12:48.350394  108421 httplog.go:90] GET /apis/storage.k8s.io/v1beta1/csinodes?limit=500&resourceVersion=0: (500.951µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42262]
I0919 12:12:48.350440  108421 reflector.go:118] Starting reflector *v1.Node (1s) from k8s.io/client-go/informers/factory.go:134
I0919 12:12:48.350459  108421 reflector.go:153] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:134
I0919 12:12:48.350503  108421 reflector.go:118] Starting reflector *v1.ReplicaSet (1s) from k8s.io/client-go/informers/factory.go:134
I0919 12:12:48.350520  108421 reflector.go:153] Listing and watching *v1.ReplicaSet from k8s.io/client-go/informers/factory.go:134
I0919 12:12:48.350759  108421 httplog.go:90] GET /apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: (456.866µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42268]
I0919 12:12:48.350833  108421 httplog.go:90] GET /api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: (632.294µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42266]
I0919 12:12:48.351258  108421 httplog.go:90] GET /apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: (331.812µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42264]
I0919 12:12:48.351350  108421 httplog.go:90] GET /apis/apps/v1/statefulsets?limit=500&resourceVersion=0: (359.282µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42262]
I0919 12:12:48.351365  108421 httplog.go:90] GET /api/v1/replicationcontrollers?limit=500&resourceVersion=0: (438.82µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42270]
I0919 12:12:48.351441  108421 httplog.go:90] GET /api/v1/services?limit=500&resourceVersion=0: (435.32µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42272]
I0919 12:12:48.351713  108421 get.go:251] Starting watch for /apis/storage.k8s.io/v1beta1/csinodes, rv=59826 labels= fields= timeout=5m43s
I0919 12:12:48.352044  108421 httplog.go:90] GET /api/v1/nodes?limit=500&resourceVersion=0: (632.283µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42268]
I0919 12:12:48.352066  108421 get.go:251] Starting watch for /apis/policy/v1beta1/poddisruptionbudgets, rv=59826 labels= fields= timeout=9m0s
I0919 12:12:48.352046  108421 httplog.go:90] GET /apis/apps/v1/replicasets?limit=500&resourceVersion=0: (395.247µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42266]
I0919 12:12:48.352129  108421 get.go:251] Starting watch for /api/v1/persistentvolumeclaims, rv=59826 labels= fields= timeout=7m56s
I0919 12:12:48.352066  108421 get.go:251] Starting watch for /apis/storage.k8s.io/v1/storageclasses, rv=59826 labels= fields= timeout=5m0s
I0919 12:12:48.352244  108421 get.go:251] Starting watch for /apis/apps/v1/statefulsets, rv=59826 labels= fields= timeout=6m23s
I0919 12:12:48.352376  108421 get.go:251] Starting watch for /api/v1/replicationcontrollers, rv=59826 labels= fields= timeout=6m40s
I0919 12:12:48.352396  108421 httplog.go:90] GET /api/v1/persistentvolumes?limit=500&resourceVersion=0: (2.836431ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42258]
I0919 12:12:48.352545  108421 get.go:251] Starting watch for /api/v1/services, rv=59940 labels= fields= timeout=5m28s
I0919 12:12:48.352740  108421 get.go:251] Starting watch for /apis/apps/v1/replicasets, rv=59826 labels= fields= timeout=9m57s
I0919 12:12:48.352814  108421 get.go:251] Starting watch for /api/v1/nodes, rv=59826 labels= fields= timeout=5m53s
I0919 12:12:48.353082  108421 get.go:251] Starting watch for /api/v1/persistentvolumes, rv=59826 labels= fields= timeout=7m58s
I0919 12:12:48.392501  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:48.441082  108421 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.454939ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46416]
I0919 12:12:48.442677  108421 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.109527ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46416]
I0919 12:12:48.444105  108421 httplog.go:90] GET /api/v1/namespaces/kube-node-lease: (1.064207ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46416]
I0919 12:12:48.449182  108421 shared_informer.go:227] caches populated
I0919 12:12:48.449218  108421 shared_informer.go:227] caches populated
I0919 12:12:48.449225  108421 shared_informer.go:227] caches populated
I0919 12:12:48.449231  108421 shared_informer.go:227] caches populated
I0919 12:12:48.449236  108421 shared_informer.go:227] caches populated
I0919 12:12:48.449242  108421 shared_informer.go:227] caches populated
I0919 12:12:48.449248  108421 shared_informer.go:227] caches populated
I0919 12:12:48.449254  108421 shared_informer.go:227] caches populated
I0919 12:12:48.449260  108421 shared_informer.go:227] caches populated
I0919 12:12:48.449268  108421 shared_informer.go:227] caches populated
I0919 12:12:48.449277  108421 shared_informer.go:227] caches populated
I0919 12:12:48.449338  108421 node_lifecycle_controller.go:327] Sending events to api server.
I0919 12:12:48.449457  108421 node_lifecycle_controller.go:359] Controller is using taint based evictions.
W0919 12:12:48.449486  108421 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0919 12:12:48.449562  108421 taint_manager.go:162] Sending events to api server.
I0919 12:12:48.449679  108421 node_lifecycle_controller.go:453] Controller will reconcile labels.
I0919 12:12:48.449709  108421 node_lifecycle_controller.go:465] Controller will taint node by condition.
W0919 12:12:48.449723  108421 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 12:12:48.449745  108421 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0919 12:12:48.449873  108421 node_lifecycle_controller.go:488] Starting node controller
I0919 12:12:48.449907  108421 shared_informer.go:197] Waiting for caches to sync for taint
I0919 12:12:48.452137  108421 httplog.go:90] POST /api/v1/namespaces: (1.89691ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42286]
I0919 12:12:48.452458  108421 node_lifecycle_controller.go:327] Sending events to api server.
I0919 12:12:48.452577  108421 node_lifecycle_controller.go:359] Controller is using taint based evictions.
I0919 12:12:48.452730  108421 taint_manager.go:162] Sending events to api server.
I0919 12:12:48.452862  108421 node_lifecycle_controller.go:453] Controller will reconcile labels.
I0919 12:12:48.452960  108421 node_lifecycle_controller.go:465] Controller will taint node by condition.
I0919 12:12:48.453056  108421 node_lifecycle_controller.go:488] Starting node controller
I0919 12:12:48.453164  108421 shared_informer.go:197] Waiting for caches to sync for taint
I0919 12:12:48.453274  108421 reflector.go:118] Starting reflector *v1.Namespace (1s) from k8s.io/client-go/informers/factory.go:134
I0919 12:12:48.453295  108421 reflector.go:153] Listing and watching *v1.Namespace from k8s.io/client-go/informers/factory.go:134
I0919 12:12:48.454180  108421 httplog.go:90] GET /api/v1/namespaces?limit=500&resourceVersion=0: (600.933µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42286]
I0919 12:12:48.455164  108421 get.go:251] Starting watch for /api/v1/namespaces, rv=59942 labels= fields= timeout=6m43s
I0919 12:12:48.471531  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:48.553197  108421 shared_informer.go:227] caches populated
I0919 12:12:48.553249  108421 shared_informer.go:227] caches populated
I0919 12:12:48.553483  108421 reflector.go:118] Starting reflector *v1.Pod (1s) from k8s.io/client-go/informers/factory.go:134
I0919 12:12:48.553508  108421 reflector.go:153] Listing and watching *v1.Pod from k8s.io/client-go/informers/factory.go:134
I0919 12:12:48.553592  108421 reflector.go:118] Starting reflector *v1beta1.Lease (1s) from k8s.io/client-go/informers/factory.go:134
I0919 12:12:48.553607  108421 reflector.go:153] Listing and watching *v1beta1.Lease from k8s.io/client-go/informers/factory.go:134
I0919 12:12:48.553723  108421 reflector.go:118] Starting reflector *v1.DaemonSet (1s) from k8s.io/client-go/informers/factory.go:134
I0919 12:12:48.553742  108421 reflector.go:153] Listing and watching *v1.DaemonSet from k8s.io/client-go/informers/factory.go:134
I0919 12:12:48.554708  108421 httplog.go:90] GET /api/v1/pods?limit=500&resourceVersion=0: (567.446µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42288]
I0919 12:12:48.554725  108421 httplog.go:90] GET /apis/apps/v1/daemonsets?limit=500&resourceVersion=0: (584.126µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42294]
I0919 12:12:48.554783  108421 httplog.go:90] GET /apis/coordination.k8s.io/v1beta1/leases?limit=500&resourceVersion=0: (645.443µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42290]
I0919 12:12:48.555346  108421 get.go:251] Starting watch for /apis/coordination.k8s.io/v1beta1/leases, rv=59826 labels= fields= timeout=7m3s
I0919 12:12:48.555462  108421 get.go:251] Starting watch for /api/v1/pods, rv=59826 labels= fields= timeout=8m45s
I0919 12:12:48.555572  108421 get.go:251] Starting watch for /apis/apps/v1/daemonsets, rv=59826 labels= fields= timeout=6m6s
I0919 12:12:48.605521  108421 node_lifecycle_controller.go:718] Controller observed a Node deletion: node-0
I0919 12:12:48.605557  108421 controller_utils.go:168] Recording Removing Node node-0 from Controller event message for node node-0
I0919 12:12:48.605581  108421 node_lifecycle_controller.go:718] Controller observed a Node deletion: node-1
I0919 12:12:48.605586  108421 controller_utils.go:168] Recording Removing Node node-1 from Controller event message for node node-1
I0919 12:12:48.605593  108421 node_lifecycle_controller.go:718] Controller observed a Node deletion: node-2
I0919 12:12:48.605597  108421 controller_utils.go:168] Recording Removing Node node-2 from Controller event message for node node-2
I0919 12:12:48.605754  108421 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-2", UID:"f0db4e80-29cf-4f90-8420-b4347a8c9e3e", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RemovingNode' Node node-2 event: Removing Node node-2 from Controller
I0919 12:12:48.605795  108421 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-1", UID:"b6276d66-de24-4874-99ee-11ff298f8978", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RemovingNode' Node node-1 event: Removing Node node-1 from Controller
I0919 12:12:48.605806  108421 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-0", UID:"7a9e9a1d-9cf8-4d29-98f5-222dc0a9e829", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RemovingNode' Node node-0 event: Removing Node node-0 from Controller
I0919 12:12:48.608340  108421 httplog.go:90] POST /api/v1/namespaces/default/events: (2.33822ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34138]
I0919 12:12:48.610601  108421 httplog.go:90] POST /api/v1/namespaces/default/events: (1.785317ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34138]
I0919 12:12:48.612544  108421 httplog.go:90] POST /api/v1/namespaces/default/events: (1.41324ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34138]
I0919 12:12:48.618586  108421 node_lifecycle_controller.go:718] Controller observed a Node deletion: node-2
I0919 12:12:48.618616  108421 controller_utils.go:168] Recording Removing Node node-2 from Controller event message for node node-2
I0919 12:12:48.618649  108421 node_lifecycle_controller.go:718] Controller observed a Node deletion: node-0
I0919 12:12:48.618656  108421 controller_utils.go:168] Recording Removing Node node-0 from Controller event message for node node-0
I0919 12:12:48.618671  108421 node_lifecycle_controller.go:718] Controller observed a Node deletion: node-1
I0919 12:12:48.618677  108421 controller_utils.go:168] Recording Removing Node node-1 from Controller event message for node node-1
I0919 12:12:48.618745  108421 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-1", UID:"b6276d66-de24-4874-99ee-11ff298f8978", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RemovingNode' Node node-1 event: Removing Node node-1 from Controller
I0919 12:12:48.618785  108421 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-2", UID:"f0db4e80-29cf-4f90-8420-b4347a8c9e3e", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RemovingNode' Node node-2 event: Removing Node node-2 from Controller
I0919 12:12:48.618800  108421 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-0", UID:"7a9e9a1d-9cf8-4d29-98f5-222dc0a9e829", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RemovingNode' Node node-0 event: Removing Node node-0 from Controller
I0919 12:12:48.620853  108421 httplog.go:90] POST /api/v1/namespaces/default/events: (1.809049ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34138]
I0919 12:12:48.622557  108421 httplog.go:90] POST /api/v1/namespaces/default/events: (1.321122ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34138]
I0919 12:12:48.624618  108421 httplog.go:90] POST /api/v1/namespaces/default/events: (1.598671ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34138]
I0919 12:12:48.650118  108421 shared_informer.go:227] caches populated
I0919 12:12:48.650149  108421 shared_informer.go:204] Caches are synced for taint 
I0919 12:12:48.650193  108421 taint_manager.go:186] Starting NoExecuteTaintManager
I0919 12:12:48.653335  108421 shared_informer.go:227] caches populated
I0919 12:12:48.653362  108421 shared_informer.go:204] Caches are synced for taint 
I0919 12:12:48.653433  108421 taint_manager.go:186] Starting NoExecuteTaintManager
I0919 12:12:48.653462  108421 shared_informer.go:227] caches populated
I0919 12:12:48.653472  108421 shared_informer.go:227] caches populated
I0919 12:12:48.653476  108421 shared_informer.go:227] caches populated
I0919 12:12:48.653480  108421 shared_informer.go:227] caches populated
I0919 12:12:48.653485  108421 shared_informer.go:227] caches populated
I0919 12:12:48.653489  108421 shared_informer.go:227] caches populated
I0919 12:12:48.653494  108421 shared_informer.go:227] caches populated
I0919 12:12:48.653498  108421 shared_informer.go:227] caches populated
I0919 12:12:48.653504  108421 shared_informer.go:227] caches populated
I0919 12:12:48.653508  108421 shared_informer.go:227] caches populated
I0919 12:12:48.653512  108421 shared_informer.go:227] caches populated
I0919 12:12:48.653516  108421 shared_informer.go:227] caches populated
I0919 12:12:48.656237  108421 httplog.go:90] POST /api/v1/nodes: (2.167832ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42298]
I0919 12:12:48.656725  108421 node_tree.go:93] Added node "node-0" in group "region1:\x00:zone1" to NodeTree
I0919 12:12:48.656787  108421 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-0"}
I0919 12:12:48.656800  108421 taint_manager.go:438] Updating known taints on node node-0: []
I0919 12:12:48.656866  108421 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-0"}
I0919 12:12:48.656872  108421 taint_manager.go:438] Updating known taints on node node-0: []
I0919 12:12:48.658599  108421 httplog.go:90] POST /api/v1/nodes: (1.883829ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42298]
I0919 12:12:48.658744  108421 node_tree.go:93] Added node "node-1" in group "region1:\x00:zone1" to NodeTree
I0919 12:12:48.658804  108421 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-1"}
I0919 12:12:48.658809  108421 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-1"}
I0919 12:12:48.658825  108421 taint_manager.go:438] Updating known taints on node node-1: []
I0919 12:12:48.658832  108421 taint_manager.go:438] Updating known taints on node node-1: []
I0919 12:12:48.661282  108421 httplog.go:90] POST /api/v1/nodes: (1.978657ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42298]
I0919 12:12:48.661825  108421 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-2"}
I0919 12:12:48.661849  108421 taint_manager.go:438] Updating known taints on node node-2: []
I0919 12:12:48.661851  108421 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-2"}
I0919 12:12:48.661870  108421 taint_manager.go:438] Updating known taints on node node-2: []
I0919 12:12:48.661896  108421 node_tree.go:93] Added node "node-2" in group "region1:\x00:zone1" to NodeTree
I0919 12:12:48.663981  108421 httplog.go:90] POST /api/v1/namespaces/taint-based-evictions0ee261a8-aacc-43d0-88cb-46538a37863c/pods: (2.111413ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42298]
I0919 12:12:48.664365  108421 scheduling_queue.go:830] About to try and schedule pod taint-based-evictions0ee261a8-aacc-43d0-88cb-46538a37863c/testpod-2
I0919 12:12:48.664384  108421 scheduler.go:530] Attempting to schedule pod: taint-based-evictions0ee261a8-aacc-43d0-88cb-46538a37863c/testpod-2
I0919 12:12:48.664408  108421 taint_manager.go:398] Noticed pod update: types.NamespacedName{Namespace:"taint-based-evictions0ee261a8-aacc-43d0-88cb-46538a37863c", Name:"testpod-2"}
I0919 12:12:48.664501  108421 taint_manager.go:398] Noticed pod update: types.NamespacedName{Namespace:"taint-based-evictions0ee261a8-aacc-43d0-88cb-46538a37863c", Name:"testpod-2"}
I0919 12:12:48.664697  108421 scheduler_binder.go:257] AssumePodVolumes for pod "taint-based-evictions0ee261a8-aacc-43d0-88cb-46538a37863c/testpod-2", node "node-0"
I0919 12:12:48.664728  108421 scheduler_binder.go:267] AssumePodVolumes for pod "taint-based-evictions0ee261a8-aacc-43d0-88cb-46538a37863c/testpod-2", node "node-0": all PVCs bound and nothing to do
I0919 12:12:48.664768  108421 factory.go:606] Attempting to bind testpod-2 to node-0
I0919 12:12:48.667083  108421 httplog.go:90] POST /api/v1/namespaces/taint-based-evictions0ee261a8-aacc-43d0-88cb-46538a37863c/pods/testpod-2/binding: (2.040527ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42298]
I0919 12:12:48.667307  108421 scheduler.go:662] pod taint-based-evictions0ee261a8-aacc-43d0-88cb-46538a37863c/testpod-2 is bound successfully on node "node-0", 3 nodes evaluated, 3 nodes were found feasible. Bound node resource: "Capacity: CPU<4>|Memory<16Gi>|Pods<110>|StorageEphemeral<0>; Allocatable: CPU<4>|Memory<16Gi>|Pods<110>|StorageEphemeral<0>.".
I0919 12:12:48.667658  108421 taint_manager.go:398] Noticed pod update: types.NamespacedName{Namespace:"taint-based-evictions0ee261a8-aacc-43d0-88cb-46538a37863c", Name:"testpod-2"}
I0919 12:12:48.667729  108421 taint_manager.go:398] Noticed pod update: types.NamespacedName{Namespace:"taint-based-evictions0ee261a8-aacc-43d0-88cb-46538a37863c", Name:"testpod-2"}
I0919 12:12:48.670039  108421 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/taint-based-evictions0ee261a8-aacc-43d0-88cb-46538a37863c/events: (2.369517ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42298]
I0919 12:12:48.766351  108421 httplog.go:90] GET /api/v1/namespaces/taint-based-evictions0ee261a8-aacc-43d0-88cb-46538a37863c/pods/testpod-2: (1.531523ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42298]
I0919 12:12:48.768385  108421 httplog.go:90] GET /api/v1/namespaces/taint-based-evictions0ee261a8-aacc-43d0-88cb-46538a37863c/pods/testpod-2: (1.398089ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42298]
I0919 12:12:48.770144  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.092647ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42298]
I0919 12:12:48.772626  108421 httplog.go:90] PUT /api/v1/nodes/node-0/status: (2.006193ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42298]
I0919 12:12:48.773625  108421 httplog.go:90] GET /api/v1/nodes/node-0?resourceVersion=0: (509.02µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42298]
I0919 12:12:48.774965  108421 httplog.go:90] GET /api/v1/nodes/node-0?resourceVersion=0: (902.406µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42300]
I0919 12:12:48.777865  108421 httplog.go:90] PATCH /api/v1/nodes/node-0: (2.68084ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42298]
I0919 12:12:48.778120  108421 store.go:362] GuaranteedUpdate of /3402e26a-010b-4204-9d3e-bc51f9b0f91b/minions/node-0 failed because of a conflict, going to retry
I0919 12:12:48.778128  108421 controller_utils.go:204] Added [&Taint{Key:node.kubernetes.io/not-ready,Value:,Effect:NoSchedule,TimeAdded:2019-09-19 12:12:48.772865673 +0000 UTC m=+364.237088662,}] Taint to Node node-0
I0919 12:12:48.778355  108421 controller_utils.go:216] Made sure that Node node-0 has no [] Taint
I0919 12:12:48.779177  108421 httplog.go:90] PATCH /api/v1/nodes/node-0: (3.235061ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42300]
I0919 12:12:48.779709  108421 controller_utils.go:204] Added [&Taint{Key:node.kubernetes.io/not-ready,Value:,Effect:NoSchedule,TimeAdded:2019-09-19 12:12:48.773310735 +0000 UTC m=+364.237533684,}] Taint to Node node-0
I0919 12:12:48.779746  108421 controller_utils.go:216] Made sure that Node node-0 has no [] Taint
I0919 12:12:48.831405  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:48.831514  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:48.831606  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:48.831659  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:48.834397  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:48.840396  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:48.840395  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:48.875262  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.841379ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42300]
I0919 12:12:48.975021  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.725053ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42300]
I0919 12:12:49.075361  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.941834ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42300]
I0919 12:12:49.175108  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.737809ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42300]
I0919 12:12:49.188883  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:49.192938  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:49.192938  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:49.193171  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:49.193180  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:49.193186  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:49.266759  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:49.266776  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:49.267994  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:49.268287  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:49.268387  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:49.268742  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:49.275380  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.903569ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42300]
I0919 12:12:49.351149  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:49.351501  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:49.351591  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:49.352501  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:49.352635  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:49.352857  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:49.375985  108421 httplog.go:90] GET /api/v1/nodes/node-0: (2.50523ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42300]
I0919 12:12:49.392695  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:49.471725  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:49.475365  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.863658ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42300]
I0919 12:12:49.555242  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:49.575577  108421 httplog.go:90] GET /api/v1/nodes/node-0: (2.180976ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42300]
I0919 12:12:49.675589  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.967537ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42300]
I0919 12:12:49.775465  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.928303ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42300]
I0919 12:12:49.831609  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:49.831681  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:49.831814  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:49.831820  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:49.834773  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:49.840604  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:49.840610  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:49.874951  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.600863ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42300]
I0919 12:12:49.975771  108421 httplog.go:90] GET /api/v1/nodes/node-0: (2.255229ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42300]
I0919 12:12:50.075292  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.802587ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42300]
I0919 12:12:50.175658  108421 httplog.go:90] GET /api/v1/nodes/node-0: (2.155421ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42300]
I0919 12:12:50.189223  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:50.193126  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:50.193361  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:50.193354  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:50.193369  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:50.193146  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:50.267005  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:50.267005  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:50.268204  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:50.268454  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:50.268615  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:50.268964  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:50.275172  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.758762ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42300]
I0919 12:12:50.351333  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:50.351666  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:50.351825  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:50.352745  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:50.352781  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:50.353010  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:50.375247  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.763192ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42300]
I0919 12:12:50.392924  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:50.471943  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:50.475487  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.944608ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42300]
I0919 12:12:50.555451  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:50.575282  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.773585ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42300]
I0919 12:12:50.675091  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.67201ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42300]
I0919 12:12:50.775300  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.892653ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42300]
I0919 12:12:50.831941  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:50.831941  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:50.832007  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:50.831956  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:50.835013  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:50.840755  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:50.841058  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:50.875131  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.647637ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42300]
I0919 12:12:50.975244  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.835625ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42300]
I0919 12:12:51.075262  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.793352ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42300]
I0919 12:12:51.175316  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.882332ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42300]
I0919 12:12:51.189471  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:51.193660  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:51.193662  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:51.193726  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:51.193851  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:51.193865  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:51.267191  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:51.267205  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:51.268382  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:51.268621  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:51.268787  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:51.269138  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:51.275097  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.670164ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42300]
I0919 12:12:51.351538  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:51.351833  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:51.352028  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:51.352913  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:51.352922  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:51.353201  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:51.375277  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.799028ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42300]
I0919 12:12:51.393117  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:51.472184  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:51.475012  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.614501ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42300]
I0919 12:12:51.555642  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:51.575232  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.816637ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42300]
I0919 12:12:51.675600  108421 httplog.go:90] GET /api/v1/nodes/node-0: (2.098109ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42300]
I0919 12:12:51.714486  108421 httplog.go:90] GET /api/v1/namespaces/default: (1.82166ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46416]
I0919 12:12:51.716455  108421 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.354054ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46416]
I0919 12:12:51.718328  108421 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.393077ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46416]
I0919 12:12:51.775238  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.751802ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42300]
I0919 12:12:51.832130  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:51.832154  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:51.832141  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:51.832139  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:51.835205  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:51.840946  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:51.841222  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:51.874994  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.639563ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42300]
I0919 12:12:51.975195  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.698934ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42300]
I0919 12:12:52.075131  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.59633ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42300]
I0919 12:12:52.175188  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.794409ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42300]
I0919 12:12:52.189658  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:52.193857  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:52.193857  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:52.193882  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:52.194008  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:52.194009  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:52.267399  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:52.267399  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:52.268607  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:52.268782  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:52.268946  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:52.269313  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:52.275288  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.903942ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42300]
I0919 12:12:52.351690  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:52.352024  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:52.352202  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:52.353062  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:52.353239  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:52.353346  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:52.375373  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.946283ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42300]
I0919 12:12:52.393323  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:52.472339  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:52.475240  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.832107ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42300]
I0919 12:12:52.555860  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:52.575210  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.773589ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42300]
I0919 12:12:52.675296  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.830412ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42300]
I0919 12:12:52.775197  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.727396ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42300]
I0919 12:12:52.832320  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:52.832327  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:52.832607  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:52.832614  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:52.835351  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:52.841146  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:52.841388  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:52.875456  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.864551ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42300]
I0919 12:12:52.975443  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.87989ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42300]
I0919 12:12:53.066820  108421 httplog.go:90] GET /api/v1/namespaces/default: (1.76407ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34138]
I0919 12:12:53.069018  108421 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.528997ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34138]
I0919 12:12:53.070833  108421 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.1472ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34138]
I0919 12:12:53.074664  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.297469ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42300]
I0919 12:12:53.175214  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.789592ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42300]
I0919 12:12:53.189857  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:53.194168  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:53.194237  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:53.194340  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:53.194360  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:53.194362  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:53.267642  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:53.267640  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:53.268791  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:53.268965  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:53.269074  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:53.269500  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:53.275482  108421 httplog.go:90] GET /api/v1/nodes/node-0: (2.12649ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42300]
I0919 12:12:53.351916  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:53.352224  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:53.352452  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:53.353224  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:53.353478  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:53.353620  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:53.375006  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.724158ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42300]
I0919 12:12:53.393502  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:53.472457  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:53.475344  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.824579ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42300]
I0919 12:12:53.556182  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:53.575487  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.979542ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42300]
I0919 12:12:53.650482  108421 node_lifecycle_controller.go:706] Controller observed a new Node: "node-0"
I0919 12:12:53.650531  108421 controller_utils.go:168] Recording Registered Node node-0 in Controller event message for node node-0
I0919 12:12:53.650619  108421 node_lifecycle_controller.go:1244] Initializing eviction metric for zone: region1:\x00:zone1
I0919 12:12:53.650641  108421 node_lifecycle_controller.go:706] Controller observed a new Node: "node-1"
I0919 12:12:53.650647  108421 controller_utils.go:168] Recording Registered Node node-1 in Controller event message for node node-1
I0919 12:12:53.650662  108421 node_lifecycle_controller.go:706] Controller observed a new Node: "node-2"
I0919 12:12:53.650830  108421 controller_utils.go:168] Recording Registered Node node-2 in Controller event message for node node-2
I0919 12:12:53.650756  108421 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-0", UID:"65500e2a-e4d9-40ff-8850-6d711ebaef28", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node node-0 event: Registered Node node-0 in Controller
I0919 12:12:53.650898  108421 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-1", UID:"c5d13ff6-4506-4a2d-bd16-fa0f0f5902d2", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node node-1 event: Registered Node node-1 in Controller
I0919 12:12:53.650982  108421 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-2", UID:"bdc79dcb-bdb2-4d02-a337-389e5748857c", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node node-2 event: Registered Node node-2 in Controller
W0919 12:12:53.650993  108421 node_lifecycle_controller.go:940] Missing timestamp for Node node-0. Assuming now as a timestamp.
I0919 12:12:53.651084  108421 node_lifecycle_controller.go:770] Node node-0 is NotReady as of 2019-09-19 12:12:53.651054473 +0000 UTC m=+369.115277431. Adding it to the Taint queue.
W0919 12:12:53.651126  108421 node_lifecycle_controller.go:940] Missing timestamp for Node node-1. Assuming now as a timestamp.
W0919 12:12:53.651166  108421 node_lifecycle_controller.go:940] Missing timestamp for Node node-2. Assuming now as a timestamp.
I0919 12:12:53.651197  108421 node_lifecycle_controller.go:1144] Controller detected that zone region1:\x00:zone1 is now in state Normal.
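Editor's note: the zone key printed above joins the node's region and zone with a NUL byte, which most terminals render as a replacement character (`�`). A minimal sketch of that key construction, assuming the separator format used by the real helper (`GetZoneKey` in `k8s.io/kubernetes/pkg/util/node`, which reads the node's failure-domain/topology labels):

```go
package main

import "fmt"

// zoneKey is a simplified sketch of how the node lifecycle controller
// identifies a zone: region and zone joined by ":\x00:", so that no
// legitimate label value can collide with the delimiter.
func zoneKey(region, zone string) string {
	return fmt.Sprintf("%v:\x00:%v", region, zone)
}

func main() {
	// The embedded NUL byte is why the log shows "region1:�:zone1".
	fmt.Printf("%q\n", zoneKey("region1", "zone1")) // "region1:\x00:zone1"
}
```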
I0919 12:12:53.653549  108421 node_lifecycle_controller.go:706] Controller observed a new Node: "node-0"
I0919 12:12:53.653577  108421 controller_utils.go:168] Recording Registered Node node-0 in Controller event message for node node-0
I0919 12:12:53.653655  108421 httplog.go:90] POST /api/v1/namespaces/default/events: (2.374226ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42300]
I0919 12:12:53.653655  108421 node_lifecycle_controller.go:1244] Initializing eviction metric for zone: region1:\x00:zone1
I0919 12:12:53.653773  108421 node_lifecycle_controller.go:706] Controller observed a new Node: "node-1"
I0919 12:12:53.653782  108421 controller_utils.go:168] Recording Registered Node node-1 in Controller event message for node node-1
I0919 12:12:53.653795  108421 node_lifecycle_controller.go:706] Controller observed a new Node: "node-2"
I0919 12:12:53.653802  108421 controller_utils.go:168] Recording Registered Node node-2 in Controller event message for node node-2
W0919 12:12:53.653832  108421 node_lifecycle_controller.go:940] Missing timestamp for Node node-0. Assuming now as a timestamp.
I0919 12:12:53.653871  108421 node_lifecycle_controller.go:770] Node node-0 is NotReady as of 2019-09-19 12:12:53.653856249 +0000 UTC m=+369.118079188. Adding it to the Taint queue.
W0919 12:12:53.653908  108421 node_lifecycle_controller.go:940] Missing timestamp for Node node-1. Assuming now as a timestamp.
I0919 12:12:53.653920  108421 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-0", UID:"65500e2a-e4d9-40ff-8850-6d711ebaef28", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node node-0 event: Registered Node node-0 in Controller
I0919 12:12:53.653952  108421 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-2", UID:"bdc79dcb-bdb2-4d02-a337-389e5748857c", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node node-2 event: Registered Node node-2 in Controller
I0919 12:12:53.653965  108421 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-1", UID:"c5d13ff6-4506-4a2d-bd16-fa0f0f5902d2", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node node-1 event: Registered Node node-1 in Controller
W0919 12:12:53.653937  108421 node_lifecycle_controller.go:940] Missing timestamp for Node node-2. Assuming now as a timestamp.
I0919 12:12:53.654033  108421 node_lifecycle_controller.go:1144] Controller detected that zone region1:\x00:zone1 is now in state Normal.
I0919 12:12:53.655992  108421 httplog.go:90] POST /api/v1/namespaces/default/events: (1.955287ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42298]
I0919 12:12:53.656020  108421 httplog.go:90] POST /api/v1/namespaces/default/events: (1.438451ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42300]
I0919 12:12:53.658215  108421 httplog.go:90] POST /api/v1/namespaces/default/events: (1.60852ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42298]
I0919 12:12:53.658523  108421 httplog.go:90] POST /api/v1/namespaces/default/events: (1.895461ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42300]
I0919 12:12:53.658668  108421 httplog.go:90] GET /api/v1/nodes/node-0?resourceVersion=0: (613.515µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42304]
I0919 12:12:53.660344  108421 httplog.go:90] POST /api/v1/namespaces/default/events: (1.391094ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42300]
I0919 12:12:53.661409  108421 httplog.go:90] PATCH /api/v1/nodes/node-0: (1.929971ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42304]
I0919 12:12:53.661739  108421 httplog.go:90] GET /api/v1/nodes/node-0?resourceVersion=0: (435.149µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42300]
I0919 12:12:53.661758  108421 controller_utils.go:204] Added [&Taint{Key:node.kubernetes.io/not-ready,Value:,Effect:NoExecute,TimeAdded:2019-09-19 12:12:53.657478835 +0000 UTC m=+369.121701782,}] Taint to Node node-0
I0919 12:12:53.661897  108421 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-0"}
I0919 12:12:53.661918  108421 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-0"}
I0919 12:12:53.661904  108421 controller_utils.go:216] Made sure that Node node-0 has no [&Taint{Key:node.kubernetes.io/unreachable,Value:,Effect:NoExecute,TimeAdded:<nil>,}] Taint
I0919 12:12:53.661933  108421 taint_manager.go:438] Updating known taints on node node-0: [{node.kubernetes.io/not-ready  NoExecute 2019-09-19 12:12:53 +0000 UTC}]
I0919 12:12:53.662052  108421 timed_workers.go:110] Adding TimedWorkerQueue item taint-based-evictions0ee261a8-aacc-43d0-88cb-46538a37863c/testpod-2 at 2019-09-19 12:12:53.662038859 +0000 UTC m=+369.126261811 to be fired at 2019-09-19 12:12:53.662038859 +0000 UTC m=+369.126261811
I0919 12:12:53.661920  108421 taint_manager.go:438] Updating known taints on node node-0: [{node.kubernetes.io/not-ready  NoExecute 2019-09-19 12:12:53 +0000 UTC}]
I0919 12:12:53.662175  108421 taint_manager.go:105] NoExecuteTaintManager is deleting Pod: taint-based-evictions0ee261a8-aacc-43d0-88cb-46538a37863c/testpod-2
I0919 12:12:53.662170  108421 timed_workers.go:110] Adding TimedWorkerQueue item taint-based-evictions0ee261a8-aacc-43d0-88cb-46538a37863c/testpod-2 at 2019-09-19 12:12:53.662157159 +0000 UTC m=+369.126380101 to be fired at 2019-09-19 12:12:53.662157159 +0000 UTC m=+369.126380101
I0919 12:12:53.662285  108421 taint_manager.go:105] NoExecuteTaintManager is deleting Pod: taint-based-evictions0ee261a8-aacc-43d0-88cb-46538a37863c/testpod-2
I0919 12:12:53.662482  108421 event.go:255] Event(v1.ObjectReference{Kind:"Pod", Namespace:"taint-based-evictions0ee261a8-aacc-43d0-88cb-46538a37863c", Name:"testpod-2", UID:"", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TaintManagerEviction' Marking for deletion Pod taint-based-evictions0ee261a8-aacc-43d0-88cb-46538a37863c/testpod-2
I0919 12:12:53.662552  108421 event.go:255] Event(v1.ObjectReference{Kind:"Pod", Namespace:"taint-based-evictions0ee261a8-aacc-43d0-88cb-46538a37863c", Name:"testpod-2", UID:"", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TaintManagerEviction' Marking for deletion Pod taint-based-evictions0ee261a8-aacc-43d0-88cb-46538a37863c/testpod-2
I0919 12:12:53.664393  108421 httplog.go:90] POST /api/v1/namespaces/taint-based-evictions0ee261a8-aacc-43d0-88cb-46538a37863c/events: (1.349409ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42308]
I0919 12:12:53.664476  108421 httplog.go:90] POST /api/v1/namespaces/taint-based-evictions0ee261a8-aacc-43d0-88cb-46538a37863c/events: (1.517002ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42306]
I0919 12:12:53.664847  108421 store.go:362] GuaranteedUpdate of /3402e26a-010b-4204-9d3e-bc51f9b0f91b/pods/taint-based-evictions0ee261a8-aacc-43d0-88cb-46538a37863c/testpod-2 failed because of a conflict, going to retry
I0919 12:12:53.665144  108421 httplog.go:90] DELETE /api/v1/namespaces/taint-based-evictions0ee261a8-aacc-43d0-88cb-46538a37863c/pods/testpod-2: (2.608906ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42304]
I0919 12:12:53.665156  108421 httplog.go:90] DELETE /api/v1/namespaces/taint-based-evictions0ee261a8-aacc-43d0-88cb-46538a37863c/pods/testpod-2: (2.594684ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42298]
I0919 12:12:53.666069  108421 httplog.go:90] PATCH /api/v1/nodes/node-0: (2.792835ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42310]
I0919 12:12:53.666845  108421 controller_utils.go:204] Added [&Taint{Key:node.kubernetes.io/not-ready,Value:,Effect:NoExecute,TimeAdded:2019-09-19 12:12:53.661127913 +0000 UTC m=+369.125351055,}] Taint to Node node-0
I0919 12:12:53.666944  108421 controller_utils.go:216] Made sure that Node node-0 has no [&Taint{Key:node.kubernetes.io/unreachable,Value:,Effect:NoExecute,TimeAdded:<nil>,}] Taint
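Editor's note: the sequence above is the NoExecute taint-based-eviction path — the controller PATCHes the `node.kubernetes.io/not-ready:NoExecute` taint onto node-0, and the NoExecuteTaintManager marks pods without a matching toleration (testpod-2 here) for deletion. A minimal sketch of that decision, using simplified stand-in types (the real types live in `k8s.io/api/core/v1` and the matching logic in the taint manager and its v1 helpers):

```go
package main

import "fmt"

// Simplified stand-ins for the corev1 Taint/Toleration types (sketch only).
type Taint struct {
	Key    string
	Value  string
	Effect string // e.g. "NoExecute"
}

type Toleration struct {
	Key      string
	Operator string // "Exists" or "Equal"
	Value    string
	Effect   string // empty matches any effect
}

// tolerates approximates the single-entry toleration/taint match:
// effect and key must agree (empty tolerates all), then the operator
// decides whether the value is compared.
func tolerates(tol Toleration, taint Taint) bool {
	if tol.Effect != "" && tol.Effect != taint.Effect {
		return false
	}
	if tol.Key != "" && tol.Key != taint.Key {
		return false
	}
	if tol.Operator == "Exists" {
		return true
	}
	return tol.Value == taint.Value // "Equal" (the default)
}

// shouldEvict mirrors the taint manager's decision: a pod is marked for
// deletion if any NoExecute taint on its node is not tolerated.
func shouldEvict(taints []Taint, tols []Toleration) bool {
	for _, t := range taints {
		if t.Effect != "NoExecute" {
			continue
		}
		tolerated := false
		for _, tol := range tols {
			if tolerates(tol, t) {
				tolerated = true
				break
			}
		}
		if !tolerated {
			return true
		}
	}
	return false
}

func main() {
	notReady := Taint{Key: "node.kubernetes.io/not-ready", Effect: "NoExecute"}
	// No matching toleration, as with testpod-2 above: evicted.
	fmt.Println(shouldEvict([]Taint{notReady}, nil)) // true
	// A pod tolerating the taint is kept (or kept for tolerationSeconds).
	tol := Toleration{Key: "node.kubernetes.io/not-ready", Operator: "Exists", Effect: "NoExecute"}
	fmt.Println(shouldEvict([]Taint{notReady}, []Toleration{tol})) // false
}
```

In the real controller the untolerated case lands in a TimedWorkerQueue (visible in the `timed_workers.go` lines above), so eviction can be delayed by `tolerationSeconds` rather than firing immediately.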
I0919 12:12:53.675080  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.691212ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42306]
I0919 12:12:53.775202  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.774994ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42306]
I0919 12:12:53.832515  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:53.832514  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:53.832808  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:53.832823  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:53.835536  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:53.841356  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:53.841657  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:53.875662  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.98328ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42306]
I0919 12:12:53.975096  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.705621ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42306]
I0919 12:12:54.075463  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.976195ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42306]
I0919 12:12:54.175230  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.830776ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42306]
I0919 12:12:54.190271  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:54.194404  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:54.194492  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:54.194507  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:54.194518  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:54.194537  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:54.268058  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:54.268058  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:54.269040  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:54.269116  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:54.269232  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:54.269701  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:54.275323  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.888423ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42306]
I0919 12:12:54.352493  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:54.352549  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:54.352778  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:54.353501  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:54.353682  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:54.354056  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:54.375100  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.690384ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42306]
I0919 12:12:54.393613  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:54.472638  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:54.475245  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.872608ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42306]
I0919 12:12:54.556564  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:54.575378  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.857067ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42306]
I0919 12:12:54.676091  108421 httplog.go:90] GET /api/v1/nodes/node-0: (2.672241ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42306]
I0919 12:12:54.775333  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.91153ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42306]
I0919 12:12:54.832774  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:54.832774  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:54.832959  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:54.833018  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:54.835750  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:54.843550  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:54.843603  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:54.875320  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.766188ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42306]
I0919 12:12:54.975458  108421 httplog.go:90] GET /api/v1/nodes/node-0: (2.035549ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42306]
I0919 12:12:55.075645  108421 httplog.go:90] GET /api/v1/nodes/node-0: (2.077826ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42306]
I0919 12:12:55.175333  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.932793ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42306]
I0919 12:12:55.190540  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:55.194676  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:55.194709  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:55.194709  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:55.194676  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:55.194693  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:55.268207  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:55.268207  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:55.269270  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:55.269271  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:55.269341  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:55.269874  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:55.275304  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.802058ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42306]
I0919 12:12:55.352697  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:55.352775  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:55.352916  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:55.353692  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:55.353896  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:55.354228  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:55.375340  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.82798ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42306]
I0919 12:12:55.393823  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:55.472840  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:55.475625  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.825051ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42306]
I0919 12:12:55.556855  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:55.575266  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.858656ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42306]
I0919 12:12:55.675125  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.727971ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42306]
I0919 12:12:55.775278  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.846555ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42306]
I0919 12:12:55.833051  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:55.833093  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:55.833150  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:55.833071  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:55.835918  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:55.843758  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:55.843759  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:55.875574  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.802274ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42306]
I0919 12:12:55.975127  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.65412ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42306]
I0919 12:12:56.075484  108421 httplog.go:90] GET /api/v1/nodes/node-0: (2.108254ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42306]
I0919 12:12:56.175523  108421 httplog.go:90] GET /api/v1/nodes/node-0: (2.048017ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42306]
I0919 12:12:56.190724  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:56.194893  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:56.194893  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:56.194910  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:56.194916  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:56.195100  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:56.268454  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:56.268459  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:56.269486  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:56.269512  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:56.269530  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:56.270078  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:56.275270  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.84756ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42306]
I0919 12:12:56.352947  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:56.352947  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:56.353315  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:56.353971  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:56.354201  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:56.354466  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:56.375773  108421 httplog.go:90] GET /api/v1/nodes/node-0: (2.263092ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42306]
I0919 12:12:56.394002  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:56.473073  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:56.475051  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.668913ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42306]
I0919 12:12:56.557347  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:56.575284  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.775963ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42306]
I0919 12:12:56.675191  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.780735ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42306]
I0919 12:12:56.775483  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.948579ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42306]
I0919 12:12:56.833300  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:56.833385  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:56.833298  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:56.833454  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:56.836084  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:56.844008  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:56.844008  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:56.875331  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.798829ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42306]
I0919 12:12:56.975078  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.693739ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42306]
I0919 12:12:57.075215  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.822774ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42306]
I0919 12:12:57.175298  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.924726ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42306]
I0919 12:12:57.190957  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:57.195126  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:57.195159  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:57.195167  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:57.195191  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:57.195245  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:57.268651  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:57.268664  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:57.269627  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:57.269668  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:57.269671  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:57.270315  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:57.275327  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.869267ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42306]
I0919 12:12:57.353177  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:57.353211  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:57.353580  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:57.354213  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:57.354498  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:57.354570  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:57.374883  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.556483ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42306]
I0919 12:12:57.394240  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:57.474118  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:57.475333  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.829276ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42306]
I0919 12:12:57.557555  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:57.575578  108421 httplog.go:90] GET /api/v1/nodes/node-0: (2.063376ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42306]
I0919 12:12:57.675250  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.757882ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42306]
I0919 12:12:57.775298  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.772174ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42306]
I0919 12:12:57.833522  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:57.833637  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:57.833651  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:57.833671  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:57.836349  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:57.844243  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:57.844243  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:57.875364  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.836916ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42306]
I0919 12:12:57.975287  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.881503ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42306]
I0919 12:12:57.987911  108421 httplog.go:90] GET /api/v1/namespaces/default: (2.059663ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42410]
I0919 12:12:57.990329  108421 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.684912ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42410]
I0919 12:12:57.992148  108421 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.172724ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42410]
I0919 12:12:58.075433  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.907074ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42306]
I0919 12:12:58.160718  108421 httplog.go:90] GET /api/v1/namespaces/default: (1.648275ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42306]
I0919 12:12:58.163187  108421 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.767604ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42306]
I0919 12:12:58.164804  108421 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.108266ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42306]
I0919 12:12:58.175183  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.720475ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42306]
I0919 12:12:58.191127  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:58.195317  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:58.195327  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:58.195359  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:58.195380  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:58.195590  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:58.268852  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:58.268856  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:58.269813  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:58.269830  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:58.269859  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:58.270491  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:58.275498  108421 httplog.go:90] GET /api/v1/nodes/node-0: (2.102747ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42306]
I0919 12:12:58.353398  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:58.353447  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:58.353774  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:58.354412  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:58.354677  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:58.354739  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:58.375403  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.902943ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42306]
I0919 12:12:58.394448  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:58.474319  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:58.475454  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.922228ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42306]
I0919 12:12:58.557780  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:58.575604  108421 httplog.go:90] GET /api/v1/nodes/node-0: (2.191066ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42306]
I0919 12:12:58.651484  108421 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 5.000453523s. Last Ready is: &NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:0001-01-01 00:00:00 +0000 UTC,Reason:,Message:,}
I0919 12:12:58.651554  108421 node_lifecycle_controller.go:1012] Condition MemoryPressure of node node-0 was never updated by kubelet
I0919 12:12:58.651567  108421 node_lifecycle_controller.go:1012] Condition DiskPressure of node node-0 was never updated by kubelet
I0919 12:12:58.651576  108421 node_lifecycle_controller.go:1012] Condition PIDPressure of node node-0 was never updated by kubelet
I0919 12:12:58.654449  108421 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 5.000542595s. Last Ready is: &NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:0001-01-01 00:00:00 +0000 UTC,Reason:,Message:,}
I0919 12:12:58.654538  108421 node_lifecycle_controller.go:1012] Condition MemoryPressure of node node-0 was never updated by kubelet
I0919 12:12:58.654554  108421 node_lifecycle_controller.go:1012] Condition DiskPressure of node node-0 was never updated by kubelet
I0919 12:12:58.654562  108421 node_lifecycle_controller.go:1012] Condition PIDPressure of node node-0 was never updated by kubelet
I0919 12:12:58.655032  108421 httplog.go:90] PUT /api/v1/nodes/node-0/status: (2.824734ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42306]
I0919 12:12:58.655408  108421 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 5.004272146s. Last Ready is: &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:0001-01-01 00:00:00 +0000 UTC,Reason:,Message:,}
I0919 12:12:58.655474  108421 node_lifecycle_controller.go:1012] Condition MemoryPressure of node node-1 was never updated by kubelet
I0919 12:12:58.655482  108421 node_lifecycle_controller.go:1012] Condition DiskPressure of node node-1 was never updated by kubelet
I0919 12:12:58.655488  108421 node_lifecycle_controller.go:1012] Condition PIDPressure of node node-1 was never updated by kubelet
I0919 12:12:58.656973  108421 httplog.go:90] GET /api/v1/nodes/node-0?resourceVersion=0: (592.967µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42316]
I0919 12:12:58.656974  108421 httplog.go:90] GET /api/v1/nodes/node-0?resourceVersion=0: (487.921µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42314]
I0919 12:12:58.657127  108421 httplog.go:90] PUT /api/v1/nodes/node-0/status: (2.053327ms) 409 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42308]
E0919 12:12:58.657382  108421 node_lifecycle_controller.go:1037] Error updating node node-0: Operation cannot be fulfilled on nodes "node-0": the object has been modified; please apply your changes to the latest version and try again
I0919 12:12:58.658632  108421 httplog.go:90] PUT /api/v1/nodes/node-1/status: (2.867064ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42306]
I0919 12:12:58.658928  108421 controller_utils.go:180] Recording status change NodeNotReady event message for node node-1
I0919 12:12:58.658959  108421 controller_utils.go:124] Update ready status of pods on node [node-1]
I0919 12:12:58.658980  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.335791ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42308]
I0919 12:12:58.659215  108421 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-1", UID:"c5d13ff6-4506-4a2d-bd16-fa0f0f5902d2", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeNotReady' Node node-1 status is now: NodeNotReady
I0919 12:12:58.660308  108421 httplog.go:90] GET /api/v1/pods?fieldSelector=spec.nodeName%3Dnode-1: (1.190668ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42316]
I0919 12:12:58.660588  108421 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 5.009406008s. Last Ready is: &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:0001-01-01 00:00:00 +0000 UTC,Reason:,Message:,}
I0919 12:12:58.660629  108421 node_lifecycle_controller.go:1012] Condition MemoryPressure of node node-2 was never updated by kubelet
I0919 12:12:58.660661  108421 node_lifecycle_controller.go:1012] Condition DiskPressure of node node-2 was never updated by kubelet
I0919 12:12:58.660678  108421 node_lifecycle_controller.go:1012] Condition PIDPressure of node node-2 was never updated by kubelet
I0919 12:12:58.661000  108421 httplog.go:90] POST /api/v1/namespaces/default/events: (1.706212ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42308]
I0919 12:12:58.661320  108421 httplog.go:90] PATCH /api/v1/nodes/node-0: (3.192078ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42314]
I0919 12:12:58.661443  108421 httplog.go:90] GET /api/v1/nodes/node-1?resourceVersion=0: (406.72µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42316]
I0919 12:12:58.661492  108421 httplog.go:90] GET /api/v1/nodes/node-1?resourceVersion=0: (426.047µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42322]
I0919 12:12:58.661507  108421 store.go:362] GuaranteedUpdate of /3402e26a-010b-4204-9d3e-bc51f9b0f91b/minions/node-0 failed because of a conflict, going to retry
I0919 12:12:58.661730  108421 controller_utils.go:204] Added [&Taint{Key:node.kubernetes.io/unreachable,Value:,Effect:NoSchedule,TimeAdded:2019-09-19 12:12:58.655814248 +0000 UTC m=+374.120037201,}] Taint to Node node-0
I0919 12:12:58.662474  108421 httplog.go:90] GET /api/v1/nodes/node-0?resourceVersion=0: (471.817µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42308]
I0919 12:12:58.662972  108421 httplog.go:90] PUT /api/v1/nodes/node-2/status: (1.930615ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42320]
I0919 12:12:58.663167  108421 httplog.go:90] PATCH /api/v1/nodes/node-0: (3.549897ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42306]
I0919 12:12:58.663341  108421 controller_utils.go:180] Recording status change NodeNotReady event message for node node-2
I0919 12:12:58.663376  108421 controller_utils.go:124] Update ready status of pods on node [node-2]
I0919 12:12:58.663903  108421 controller_utils.go:204] Added [&Taint{Key:node.kubernetes.io/unreachable,Value:,Effect:NoSchedule,TimeAdded:2019-09-19 12:12:58.655802474 +0000 UTC m=+374.120025426,}] Taint to Node node-0
I0919 12:12:58.664020  108421 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-2", UID:"bdc79dcb-bdb2-4d02-a337-389e5748857c", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeNotReady' Node node-2 status is now: NodeNotReady
I0919 12:12:58.664174  108421 httplog.go:90] GET /api/v1/nodes/node-2?resourceVersion=0: (750.231µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42320]
I0919 12:12:58.664523  108421 httplog.go:90] GET /api/v1/nodes/node-2?resourceVersion=0: (461.015µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42308]
I0919 12:12:58.664952  108421 httplog.go:90] GET /api/v1/nodes/node-0?resourceVersion=0: (578.843µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42332]
I0919 12:12:58.665368  108421 store.go:362] GuaranteedUpdate of /3402e26a-010b-4204-9d3e-bc51f9b0f91b/minions/node-1 failed because of a conflict, going to retry
I0919 12:12:58.665779  108421 httplog.go:90] PATCH /api/v1/nodes/node-1: (3.251169ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42324]
I0919 12:12:58.666159  108421 httplog.go:90] POST /api/v1/namespaces/default/events: (1.770432ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42330]
I0919 12:12:58.666186  108421 controller_utils.go:204] Added [&Taint{Key:node.kubernetes.io/unreachable,Value:,Effect:NoSchedule,TimeAdded:2019-09-19 12:12:58.659996662 +0000 UTC m=+374.124219608,}] Taint to Node node-1
I0919 12:12:58.666216  108421 controller_utils.go:216] Made sure that Node node-1 has no [] Taint
I0919 12:12:58.666326  108421 httplog.go:90] PATCH /api/v1/nodes/node-0: (2.83341ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42306]
I0919 12:12:58.666715  108421 httplog.go:90] GET /api/v1/pods?fieldSelector=spec.nodeName%3Dnode-2: (2.37438ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42328]
I0919 12:12:58.666960  108421 node_lifecycle_controller.go:1094] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
I0919 12:12:58.667032  108421 controller_utils.go:216] Made sure that Node node-0 has no [&Taint{Key:node.kubernetes.io/not-ready,Value:,Effect:NoSchedule,TimeAdded:2019-09-19 12:12:48 +0000 UTC,}] Taint
I0919 12:12:58.667149  108421 httplog.go:90] PATCH /api/v1/nodes/node-1: (4.492923ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42326]
I0919 12:12:58.667481  108421 controller_utils.go:204] Added [&Taint{Key:node.kubernetes.io/unreachable,Value:,Effect:NoSchedule,TimeAdded:2019-09-19 12:12:58.660376101 +0000 UTC m=+374.124599047,}] Taint to Node node-1
I0919 12:12:58.667526  108421 controller_utils.go:216] Made sure that Node node-1 has no [] Taint
I0919 12:12:58.667675  108421 httplog.go:90] GET /api/v1/nodes/node-0?resourceVersion=0: (455.141µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42306]
I0919 12:12:58.668191  108421 httplog.go:90] PATCH /api/v1/nodes/node-0: (1.986913ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42320]
I0919 12:12:58.668531  108421 controller_utils.go:216] Made sure that Node node-0 has no [&Taint{Key:node.kubernetes.io/not-ready,Value:,Effect:NoSchedule,TimeAdded:2019-09-19 12:12:48 +0000 UTC,}] Taint
I0919 12:12:58.669082  108421 store.go:362] GuaranteedUpdate of /3402e26a-010b-4204-9d3e-bc51f9b0f91b/minions/node-2 failed because of a conflict, going to retry
I0919 12:12:58.669235  108421 httplog.go:90] PATCH /api/v1/nodes/node-2: (3.897963ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42308]
I0919 12:12:58.669533  108421 controller_utils.go:204] Added [&Taint{Key:node.kubernetes.io/unreachable,Value:,Effect:NoSchedule,TimeAdded:2019-09-19 12:12:58.663253361 +0000 UTC m=+374.127476285,}] Taint to Node node-2
I0919 12:12:58.669575  108421 controller_utils.go:216] Made sure that Node node-2 has no [] Taint
I0919 12:12:58.670455  108421 httplog.go:90] PATCH /api/v1/nodes/node-2: (3.338767ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42324]
I0919 12:12:58.671305  108421 controller_utils.go:204] Added [&Taint{Key:node.kubernetes.io/unreachable,Value:,Effect:NoSchedule,TimeAdded:2019-09-19 12:12:58.663100187 +0000 UTC m=+374.127323126,}] Taint to Node node-2
I0919 12:12:58.671355  108421 controller_utils.go:216] Made sure that Node node-2 has no [] Taint
I0919 12:12:58.671859  108421 httplog.go:90] PATCH /api/v1/nodes/node-0: (2.969709ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42326]
I0919 12:12:58.672389  108421 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-0"}
I0919 12:12:58.672400  108421 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-0"}
I0919 12:12:58.672438  108421 taint_manager.go:438] Updating known taints on node node-0: []
I0919 12:12:58.672446  108421 taint_manager.go:438] Updating known taints on node node-0: []
I0919 12:12:58.672459  108421 taint_manager.go:459] All taints were removed from the Node node-0. Cancelling all evictions...
I0919 12:12:58.672459  108421 taint_manager.go:459] All taints were removed from the Node node-0. Cancelling all evictions...
I0919 12:12:58.672468  108421 timed_workers.go:129] Cancelling TimedWorkerQueue item taint-based-evictions0ee261a8-aacc-43d0-88cb-46538a37863c/testpod-2 at 2019-09-19 12:12:58.672465662 +0000 UTC m=+374.136688601
I0919 12:12:58.672473  108421 timed_workers.go:129] Cancelling TimedWorkerQueue item taint-based-evictions0ee261a8-aacc-43d0-88cb-46538a37863c/testpod-2 at 2019-09-19 12:12:58.672469967 +0000 UTC m=+374.136692913
I0919 12:12:58.674450  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.094095ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42308]
I0919 12:12:58.679474  108421 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 5.025617625s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-19 12:12:58 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0919 12:12:58.679526  108421 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 5.025680481s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 12:12:48 +0000 UTC,LastTransitionTime:2019-09-19 12:12:58 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 12:12:58.679539  108421 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 5.025694775s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 12:12:48 +0000 UTC,LastTransitionTime:2019-09-19 12:12:58 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 12:12:58.679554  108421 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 5.025708892s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 12:12:48 +0000 UTC,LastTransitionTime:2019-09-19 12:12:58 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 12:12:58.680535  108421 httplog.go:90] GET /api/v1/nodes/node-0?resourceVersion=0: (626.227µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42308]
I0919 12:12:58.684254  108421 httplog.go:90] PATCH /api/v1/nodes/node-0: (2.490257ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42308]
I0919 12:12:58.684622  108421 controller_utils.go:204] Added [&Taint{Key:node.kubernetes.io/unreachable,Value:,Effect:NoExecute,TimeAdded:2019-09-19 12:12:58.679603297 +0000 UTC m=+374.143826242,}] Taint to Node node-0
I0919 12:12:58.684717  108421 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-0"}
I0919 12:12:58.684734  108421 taint_manager.go:438] Updating known taints on node node-0: [{node.kubernetes.io/unreachable  NoExecute 2019-09-19 12:12:58 +0000 UTC}]
I0919 12:12:58.684779  108421 timed_workers.go:110] Adding TimedWorkerQueue item taint-based-evictions0ee261a8-aacc-43d0-88cb-46538a37863c/testpod-2 at 2019-09-19 12:12:58.684768528 +0000 UTC m=+374.148991476 to be fired at 2019-09-19 12:17:58.684768528 +0000 UTC m=+674.148991476
I0919 12:12:58.684796  108421 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-0"}
I0919 12:12:58.684820  108421 taint_manager.go:438] Updating known taints on node node-0: [{node.kubernetes.io/unreachable  NoExecute 2019-09-19 12:12:58 +0000 UTC}]
I0919 12:12:58.684854  108421 timed_workers.go:110] Adding TimedWorkerQueue item taint-based-evictions0ee261a8-aacc-43d0-88cb-46538a37863c/testpod-2 at 2019-09-19 12:12:58.684844627 +0000 UTC m=+374.149067576 to be fired at 2019-09-19 12:17:58.684844627 +0000 UTC m=+674.149067576
I0919 12:12:58.685312  108421 httplog.go:90] GET /api/v1/nodes/node-0?resourceVersion=0: (423.288µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42308]
I0919 12:12:58.685702  108421 controller_utils.go:216] Made sure that Node node-0 has no [&Taint{Key:node.kubernetes.io/not-ready,Value:,Effect:NoExecute,TimeAdded:<nil>,}] Taint
I0919 12:12:58.685782  108421 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 5.031864634s. Last Ready is: &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:0001-01-01 00:00:00 +0000 UTC,Reason:,Message:,}
I0919 12:12:58.685804  108421 node_lifecycle_controller.go:1012] Condition MemoryPressure of node node-1 was never updated by kubelet
I0919 12:12:58.685812  108421 node_lifecycle_controller.go:1012] Condition DiskPressure of node node-1 was never updated by kubelet
I0919 12:12:58.685817  108421 node_lifecycle_controller.go:1012] Condition PIDPressure of node node-1 was never updated by kubelet
I0919 12:12:58.687498  108421 httplog.go:90] PUT /api/v1/nodes/node-1/status: (1.485405ms) 409 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42308]
E0919 12:12:58.687747  108421 node_lifecycle_controller.go:1037] Error updating node node-1: Operation cannot be fulfilled on nodes "node-1": the object has been modified; please apply your changes to the latest version and try again
I0919 12:12:58.689009  108421 httplog.go:90] GET /api/v1/nodes/node-1: (1.070197ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42308]
I0919 12:12:58.709536  108421 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 5.055605196s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-19 12:12:58 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0919 12:12:58.709603  108421 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 5.055685553s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 12:12:48 +0000 UTC,LastTransitionTime:2019-09-19 12:12:58 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 12:12:58.709646  108421 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 5.055730359s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 12:12:48 +0000 UTC,LastTransitionTime:2019-09-19 12:12:58 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 12:12:58.709662  108421 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 5.055746849s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 12:12:48 +0000 UTC,LastTransitionTime:2019-09-19 12:12:58 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 12:12:58.709737  108421 node_lifecycle_controller.go:796] Node node-1 is unresponsive as of 2019-09-19 12:12:58.709713076 +0000 UTC m=+374.173936027. Adding it to the Taint queue.
I0919 12:12:58.709779  108421 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 5.055770399s. Last Ready is: &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:0001-01-01 00:00:00 +0000 UTC,Reason:,Message:,}
I0919 12:12:58.709801  108421 node_lifecycle_controller.go:1012] Condition MemoryPressure of node node-2 was never updated by kubelet
I0919 12:12:58.709813  108421 node_lifecycle_controller.go:1012] Condition DiskPressure of node node-2 was never updated by kubelet
I0919 12:12:58.709823  108421 node_lifecycle_controller.go:1012] Condition PIDPressure of node node-2 was never updated by kubelet
I0919 12:12:58.712469  108421 httplog.go:90] PUT /api/v1/nodes/node-2/status: (2.180857ms) 409 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42308]
E0919 12:12:58.712863  108421 node_lifecycle_controller.go:1037] Error updating node node-2: Operation cannot be fulfilled on nodes "node-2": the object has been modified; please apply your changes to the latest version and try again
I0919 12:12:58.714599  108421 httplog.go:90] GET /api/v1/nodes/node-2: (1.512357ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42308]
I0919 12:12:58.735182  108421 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 5.081160289s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-19 12:12:58 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0919 12:12:58.735249  108421 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 5.081241248s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 12:12:48 +0000 UTC,LastTransitionTime:2019-09-19 12:12:58 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 12:12:58.735266  108421 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 5.081258163s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 12:12:48 +0000 UTC,LastTransitionTime:2019-09-19 12:12:58 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 12:12:58.735277  108421 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 5.081269716s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 12:12:48 +0000 UTC,LastTransitionTime:2019-09-19 12:12:58 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 12:12:58.735377  108421 node_lifecycle_controller.go:796] Node node-2 is unresponsive as of 2019-09-19 12:12:58.735359421 +0000 UTC m=+374.199582363. Adding it to the Taint queue.
I0919 12:12:58.735481  108421 node_lifecycle_controller.go:1094] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
I0919 12:12:58.736765  108421 httplog.go:90] GET /api/v1/nodes/node-0?resourceVersion=0: (824.136µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42308]
I0919 12:12:58.775373  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.888903ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42308]
I0919 12:12:58.833778  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:58.833810  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:58.833819  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:58.833974  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:58.836555  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:58.844737  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:58.844747  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:58.875549  108421 httplog.go:90] GET /api/v1/nodes/node-0: (2.032162ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42308]
I0919 12:12:58.975436  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.944424ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42308]
I0919 12:12:59.075356  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.847672ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42308]
I0919 12:12:59.175435  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.955383ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42308]
I0919 12:12:59.191597  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:59.195463  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:59.195463  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:59.195475  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:59.195539  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:59.195776  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:59.269308  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:59.269326  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:59.270145  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:59.270160  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:59.270219  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:59.270675  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:59.275172  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.814448ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42308]
I0919 12:12:59.353621  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:59.353627  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:59.353952  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:59.354640  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:59.354888  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:59.354960  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:59.375597  108421 httplog.go:90] GET /api/v1/nodes/node-0: (2.098459ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42308]
I0919 12:12:59.394706  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:59.474499  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:59.475384  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.980088ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42308]
I0919 12:12:59.558052  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:59.575302  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.814359ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42308]
I0919 12:12:59.675433  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.869902ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42308]
I0919 12:12:59.775641  108421 httplog.go:90] GET /api/v1/nodes/node-0: (2.223395ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42308]
I0919 12:12:59.833998  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:59.834001  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:59.834027  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:59.834283  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:59.836746  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:59.844922  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:59.844930  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:12:59.875409  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.918193ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42308]
I0919 12:12:59.975364  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.942893ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42308]
I0919 12:13:00.075470  108421 httplog.go:90] GET /api/v1/nodes/node-0: (2.057164ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42308]
I0919 12:13:00.175739  108421 httplog.go:90] GET /api/v1/nodes/node-0: (2.36414ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42308]
I0919 12:13:00.191790  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:00.195717  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:00.195727  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:00.195717  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:00.195719  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:00.195970  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:00.269512  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:00.269539  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:00.270321  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:00.270329  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:00.270364  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:00.270890  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:00.275281  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.904378ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42308]
I0919 12:13:00.353833  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:00.353844  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:00.354114  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:00.354851  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:00.355081  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:00.355131  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:00.375436  108421 httplog.go:90] GET /api/v1/nodes/node-0: (2.048793ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42308]
I0919 12:13:00.394924  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:00.474821  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:00.475284  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.783776ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42308]
I0919 12:13:00.558441  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:00.575449  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.946428ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42308]
I0919 12:13:00.675278  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.845314ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42308]
I0919 12:13:00.775250  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.762553ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42308]
I0919 12:13:00.834352  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:00.834484  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:00.834554  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:00.834685  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:00.836928  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:00.845236  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:00.845249  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:00.875374  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.846471ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42308]
I0919 12:13:00.975249  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.747921ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42308]
I0919 12:13:01.075118  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.642152ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42308]
I0919 12:13:01.175344  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.814957ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42308]
I0919 12:13:01.192004  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:01.195873  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:01.195904  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:01.195902  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:01.196076  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:01.196280  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:01.269853  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:01.269855  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:01.270497  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:01.270517  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:01.270529  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:01.271080  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:01.275010  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.675419ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42308]
I0919 12:13:01.354205  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:01.354215  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:01.354255  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:01.355130  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:01.355395  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:01.355482  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:01.375860  108421 httplog.go:90] GET /api/v1/nodes/node-0: (2.389763ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42308]
I0919 12:13:01.395137  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:01.475577  108421 httplog.go:90] GET /api/v1/nodes/node-0: (2.037025ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42308]
I0919 12:13:01.475790  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:01.558693  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:01.575568  108421 httplog.go:90] GET /api/v1/nodes/node-0: (2.037183ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42308]
I0919 12:13:01.675300  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.80995ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42308]
I0919 12:13:01.714578  108421 httplog.go:90] GET /api/v1/namespaces/default: (1.81219ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46416]
I0919 12:13:01.716545  108421 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.453214ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46416]
I0919 12:13:01.718351  108421 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.226252ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46416]
I0919 12:13:01.775484  108421 httplog.go:90] GET /api/v1/nodes/node-0: (2.014613ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42308]
I0919 12:13:01.834556  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:01.834865  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:01.834867  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:01.834879  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:01.837113  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:01.845380  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:01.845456  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:01.874848  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.512962ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42308]
I0919 12:13:01.975221  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.795575ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42308]
I0919 12:13:02.075447  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.894676ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42308]
I0919 12:13:02.175263  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.799101ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42308]
I0919 12:13:02.192277  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:02.196070  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:02.196077  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:02.196100  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:02.196278  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:02.196419  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:02.270059  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:02.270059  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:02.270669  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:02.270685  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:02.270686  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:02.271284  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:02.276168  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.993969ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42308]
I0919 12:13:02.354380  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:02.354380  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:02.354399  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:02.355314  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:02.355590  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:02.355624  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:02.375256  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.779212ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42308]
I0919 12:13:02.395343  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:02.474882  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.495323ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42308]
I0919 12:13:02.475980  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:02.558979  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:02.574819  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.495448ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42308]
I0919 12:13:02.675095  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.567431ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42308]
I0919 12:13:02.775123  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.738075ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42308]
I0919 12:13:02.834751  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:02.835077  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:02.835107  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:02.835319  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:02.837482  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:02.845780  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:02.845780  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:02.875237  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.7585ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42308]
I0919 12:13:02.975339  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.833543ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42308]
I0919 12:13:03.067018  108421 httplog.go:90] GET /api/v1/namespaces/default: (1.866862ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34138]
I0919 12:13:03.069073  108421 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.451293ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34138]
I0919 12:13:03.070973  108421 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.29087ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34138]
I0919 12:13:03.075018  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.685586ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42308]
I0919 12:13:03.175178  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.700426ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42308]
I0919 12:13:03.192468  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:03.196310  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:03.196487  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:03.196511  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:03.196535  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:03.196557  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:03.270287  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:03.270296  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:03.270847  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:03.270879  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:03.270888  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:03.271478  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:03.275280  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.906519ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42308]
I0919 12:13:03.354508  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:03.354589  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:03.354596  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:03.355477  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:03.355787  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:03.355808  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:03.375284  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.917234ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42308]
I0919 12:13:03.395608  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:03.475470  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.910985ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42308]
I0919 12:13:03.476139  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:03.559215  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:03.575594  108421 httplog.go:90] GET /api/v1/nodes/node-0: (2.101721ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42308]
I0919 12:13:03.672459  108421 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 10.021426276s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-19 12:12:58 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0919 12:13:03.672534  108421 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 10.021514805s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 12:12:48 +0000 UTC,LastTransitionTime:2019-09-19 12:12:58 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 12:13:03.672556  108421 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 10.021538386s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 12:12:48 +0000 UTC,LastTransitionTime:2019-09-19 12:12:58 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 12:13:03.672571  108421 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 10.021553239s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 12:12:48 +0000 UTC,LastTransitionTime:2019-09-19 12:12:58 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 12:13:03.672641  108421 node_lifecycle_controller.go:796] Node node-0 is unresponsive as of 2019-09-19 12:13:03.672621827 +0000 UTC m=+379.136844778. Adding it to the Taint queue.
I0919 12:13:03.672691  108421 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 10.021558539s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-19 12:12:58 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0919 12:13:03.672713  108421 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 10.021580651s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 12:12:48 +0000 UTC,LastTransitionTime:2019-09-19 12:12:58 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 12:13:03.672741  108421 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 10.021609232s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 12:12:48 +0000 UTC,LastTransitionTime:2019-09-19 12:12:58 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 12:13:03.672811  108421 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 10.021677862s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 12:12:48 +0000 UTC,LastTransitionTime:2019-09-19 12:12:58 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 12:13:03.672871  108421 node_lifecycle_controller.go:796] Node node-1 is unresponsive as of 2019-09-19 12:13:03.672856986 +0000 UTC m=+379.137079937. Adding it to the Taint queue.
I0919 12:13:03.672900  108421 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 10.021721579s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-19 12:12:58 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0919 12:13:03.672921  108421 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 10.02174339s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 12:12:48 +0000 UTC,LastTransitionTime:2019-09-19 12:12:58 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 12:13:03.672935  108421 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 10.021757639s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 12:12:48 +0000 UTC,LastTransitionTime:2019-09-19 12:12:58 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 12:13:03.672993  108421 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 10.021814744s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 12:12:48 +0000 UTC,LastTransitionTime:2019-09-19 12:12:58 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 12:13:03.673073  108421 node_lifecycle_controller.go:796] Node node-2 is unresponsive as of 2019-09-19 12:13:03.673012902 +0000 UTC m=+379.137235851. Adding it to the Taint queue.
I0919 12:13:03.675275  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.769739ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42308]
I0919 12:13:03.737577  108421 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 10.083653439s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-19 12:12:58 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0919 12:13:03.737657  108421 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 10.083727643s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 12:12:48 +0000 UTC,LastTransitionTime:2019-09-19 12:12:58 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 12:13:03.737673  108421 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 10.083758515s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 12:12:48 +0000 UTC,LastTransitionTime:2019-09-19 12:12:58 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 12:13:03.737684  108421 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 10.083770294s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 12:12:48 +0000 UTC,LastTransitionTime:2019-09-19 12:12:58 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 12:13:03.737741  108421 node_lifecycle_controller.go:796] Node node-1 is unresponsive as of 2019-09-19 12:13:03.737727657 +0000 UTC m=+379.201950583. Adding it to the Taint queue.
I0919 12:13:03.737779  108421 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 10.083772062s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-19 12:12:58 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0919 12:13:03.737791  108421 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 10.083783481s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 12:12:48 +0000 UTC,LastTransitionTime:2019-09-19 12:12:58 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 12:13:03.737805  108421 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 10.083797756s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 12:12:48 +0000 UTC,LastTransitionTime:2019-09-19 12:12:58 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 12:13:03.737820  108421 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 10.083812681s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 12:12:48 +0000 UTC,LastTransitionTime:2019-09-19 12:12:58 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 12:13:03.737839  108421 node_lifecycle_controller.go:796] Node node-2 is unresponsive as of 2019-09-19 12:13:03.73783292 +0000 UTC m=+379.202055861. Adding it to the Taint queue.
I0919 12:13:03.737857  108421 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 10.084012767s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-19 12:12:58 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0919 12:13:03.737868  108421 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 10.084023476s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 12:12:48 +0000 UTC,LastTransitionTime:2019-09-19 12:12:58 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 12:13:03.737879  108421 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 10.084034465s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 12:12:48 +0000 UTC,LastTransitionTime:2019-09-19 12:12:58 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 12:13:03.737889  108421 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 10.084044866s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 12:12:48 +0000 UTC,LastTransitionTime:2019-09-19 12:12:58 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 12:13:03.737908  108421 node_lifecycle_controller.go:796] Node node-0 is unresponsive as of 2019-09-19 12:13:03.737901357 +0000 UTC m=+379.202124298. Adding it to the Taint queue.
I0919 12:13:03.775492  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.942345ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42308]
I0919 12:13:03.834983  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:03.835249  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:03.835272  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:03.835508  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:03.837641  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:03.845989  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:03.845989  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:03.875565  108421 httplog.go:90] GET /api/v1/nodes/node-0: (2.146767ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42308]
I0919 12:13:03.975619  108421 httplog.go:90] GET /api/v1/nodes/node-0: (2.075609ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42308]
I0919 12:13:04.075352  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.924669ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42308]
I0919 12:13:04.175371  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.892841ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42308]
I0919 12:13:04.192651  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:04.196509  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:04.196588  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:04.196615  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:04.196687  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:04.196699  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:04.270501  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:04.270501  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:04.271032  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:04.271036  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:04.271042  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:04.271665  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:04.275517  108421 httplog.go:90] GET /api/v1/nodes/node-0: (2.17125ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42308]
I0919 12:13:04.354700  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:04.354716  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:04.354732  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:04.355699  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:04.355906  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:04.355943  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:04.375287  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.869425ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42308]
I0919 12:13:04.395836  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:04.475230  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.760484ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42308]
I0919 12:13:04.476299  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:04.559407  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:04.575225  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.712049ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42308]
I0919 12:13:04.675335  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.913781ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42308]
I0919 12:13:04.775236  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.784195ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42308]
I0919 12:13:04.835210  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:04.835436  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:04.835490  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:04.835671  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:04.837978  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:04.846216  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:04.846216  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:04.875696  108421 httplog.go:90] GET /api/v1/nodes/node-0: (2.296481ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42308]
I0919 12:13:04.975491  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.924269ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42308]
I0919 12:13:05.075850  108421 httplog.go:90] GET /api/v1/nodes/node-0: (2.421624ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42308]
I0919 12:13:05.175346  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.831715ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42308]
I0919 12:13:05.192897  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:05.196705  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:05.196818  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:05.196835  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:05.196862  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:05.198501  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:05.270760  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:05.270760  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:05.271366  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:05.271367  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:05.271374  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:05.271858  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:05.275666  108421 httplog.go:90] GET /api/v1/nodes/node-0: (2.193069ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42308]
I0919 12:13:05.355034  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:05.355045  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:05.355047  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:05.355908  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:05.356078  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:05.356153  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:05.375206  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.7866ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42308]
I0919 12:13:05.396030  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:05.475205  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.775985ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42308]
I0919 12:13:05.476490  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:05.559662  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:05.575503  108421 httplog.go:90] GET /api/v1/nodes/node-0: (2.034949ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42308]
I0919 12:13:05.675272  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.865622ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42308]
I0919 12:13:05.775333  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.736262ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42308]
I0919 12:13:05.835462  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:05.835638  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:05.835658  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:05.835798  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:05.838180  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:05.846450  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:05.846454  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:05.875485  108421 httplog.go:90] GET /api/v1/nodes/node-0: (2.020873ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42308]
I0919 12:13:05.975328  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.809396ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42308]
I0919 12:13:06.075345  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.924765ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42308]
I0919 12:13:06.175202  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.786338ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42308]
I0919 12:13:06.193112  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:06.196868  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:06.196871  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:06.196973  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:06.196996  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:06.198681  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:06.270965  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:06.270965  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:06.271596  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:06.271643  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:06.271643  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:06.271949  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:06.275443  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.898738ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42308]
I0919 12:13:06.355237  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:06.355237  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:06.355237  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:06.356120  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:06.356318  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:06.356516  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:06.375387  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.961235ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42308]
I0919 12:13:06.396234  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:06.475482  108421 httplog.go:90] GET /api/v1/nodes/node-0: (2.074163ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42308]
I0919 12:13:06.476654  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:06.559920  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:06.574940  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.542403ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42308]
I0919 12:13:06.675184  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.825832ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42308]
I0919 12:13:06.775285  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.821263ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42308]
I0919 12:13:06.835671  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:06.835782  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:06.835795  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:06.835939  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:06.838483  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:06.846758  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:06.846769  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:06.875294  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.865378ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42308]
I0919 12:13:06.975705  108421 httplog.go:90] GET /api/v1/nodes/node-0: (2.189412ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42308]
I0919 12:13:07.075222  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.740814ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42308]
I0919 12:13:07.175475  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.993009ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42308]
I0919 12:13:07.193552  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:07.197322  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:07.197621  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:07.197626  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:07.197660  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:07.199004  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:07.271188  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:07.271209  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:07.271836  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:07.271849  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:07.271858  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:07.272207  108421 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 12:13:07.275189  108421 httplog.go:90] GET /api/v1/nodes/node-0: (1.724889ms)