PR (draveness): feat: update taint nodes by condition to GA
Result: FAILURE
Tests: 7 failed / 2861 succeeded
Started: 2019-09-19 10:07
Elapsed: 27m13s
Revision:
Builder: gke-prow-ssd-pool-1a225945-f0q6
Refs: master:b8866250, 82703:f642bc2f
pod: 316c886c-dac5-11e9-b559-260d2af1bc04
infra-commit: fe9f237a8
repo: k8s.io/kubernetes
repo-commit: e9a75e611f9c04d4b4e217c83e20674593d8f0e4
repos: k8s.io/kubernetes: master:b88662505d288297750becf968bf307dacf872fa, 82703:f642bc2feb755cb6f834787163725a498cda4dce

Test Failures


k8s.io/kubernetes/test/integration/scheduler TestNodePIDPressure 33s

go test -v k8s.io/kubernetes/test/integration/scheduler -run TestNodePIDPressure$
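Note: the command above assumes a running etcd, since the scheduler integration tests talk to etcd on 127.0.0.1:2379 (visible in the log below). A minimal local-repro sketch, assuming a kubernetes/kubernetes checkout; the `hack/install-etcd.sh` script and `third_party/etcd` path are the usual upstream conventions, not taken from this page:

```shell
# From the root of a kubernetes/kubernetes checkout:

# Install the pinned etcd version and put it on PATH
# (the integration framework expects etcd on 127.0.0.1:2379).
./hack/install-etcd.sh
export PATH="${PATH}:$(pwd)/third_party/etcd"

# Re-run only the failing test; -run takes a regex, and the
# trailing $ anchors it so longer test names do not also match.
go test -v k8s.io/kubernetes/test/integration/scheduler -run 'TestNodePIDPressure$'
```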
=== RUN   TestNodePIDPressure
W0919 10:29:10.888215  108479 services.go:35] No CIDR for service cluster IPs specified. Default value which was 10.0.0.0/24 is deprecated and will be removed in future releases. Please specify it using --service-cluster-ip-range on kube-apiserver.
I0919 10:29:10.888233  108479 services.go:47] Setting service IP to "10.0.0.1" (read-write).
I0919 10:29:10.888246  108479 master.go:303] Node port range unspecified. Defaulting to 30000-32767.
I0919 10:29:10.888257  108479 master.go:259] Using reconciler: 
I0919 10:29:10.890699  108479 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:10.891068  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:29:10.891205  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:29:10.891909  108479 store.go:1342] Monitoring podtemplates count at <storage-prefix>//podtemplates
I0919 10:29:10.891942  108479 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:10.891966  108479 reflector.go:153] Listing and watching *core.PodTemplate from storage/cacher.go:/podtemplates
I0919 10:29:10.892124  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:29:10.892137  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:29:10.893424  108479 watch_cache.go:405] Replace watchCache (rev: 30567) 
I0919 10:29:10.893576  108479 store.go:1342] Monitoring events count at <storage-prefix>//events
I0919 10:29:10.893608  108479 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:10.893650  108479 reflector.go:153] Listing and watching *core.Event from storage/cacher.go:/events
I0919 10:29:10.893822  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:29:10.893883  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:29:10.895268  108479 watch_cache.go:405] Replace watchCache (rev: 30567) 
I0919 10:29:10.895566  108479 reflector.go:153] Listing and watching *core.LimitRange from storage/cacher.go:/limitranges
I0919 10:29:10.895580  108479 store.go:1342] Monitoring limitranges count at <storage-prefix>//limitranges
I0919 10:29:10.895618  108479 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:10.895859  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:29:10.895888  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:29:10.896995  108479 store.go:1342] Monitoring resourcequotas count at <storage-prefix>//resourcequotas
I0919 10:29:10.897155  108479 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:10.897393  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:29:10.897420  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:29:10.897506  108479 reflector.go:153] Listing and watching *core.ResourceQuota from storage/cacher.go:/resourcequotas
I0919 10:29:10.898216  108479 watch_cache.go:405] Replace watchCache (rev: 30567) 
I0919 10:29:10.898457  108479 watch_cache.go:405] Replace watchCache (rev: 30567) 
I0919 10:29:10.899949  108479 store.go:1342] Monitoring secrets count at <storage-prefix>//secrets
I0919 10:29:10.900016  108479 reflector.go:153] Listing and watching *core.Secret from storage/cacher.go:/secrets
I0919 10:29:10.900489  108479 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:10.900818  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:29:10.900849  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:29:10.900874  108479 watch_cache.go:405] Replace watchCache (rev: 30567) 
I0919 10:29:10.902071  108479 store.go:1342] Monitoring persistentvolumes count at <storage-prefix>//persistentvolumes
I0919 10:29:10.902130  108479 reflector.go:153] Listing and watching *core.PersistentVolume from storage/cacher.go:/persistentvolumes
I0919 10:29:10.902305  108479 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:10.902438  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:29:10.902454  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:29:10.902784  108479 watch_cache.go:405] Replace watchCache (rev: 30567) 
I0919 10:29:10.904266  108479 store.go:1342] Monitoring persistentvolumeclaims count at <storage-prefix>//persistentvolumeclaims
I0919 10:29:10.904345  108479 reflector.go:153] Listing and watching *core.PersistentVolumeClaim from storage/cacher.go:/persistentvolumeclaims
I0919 10:29:10.904742  108479 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:10.905210  108479 watch_cache.go:405] Replace watchCache (rev: 30567) 
I0919 10:29:10.906163  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:29:10.906306  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:29:10.907071  108479 store.go:1342] Monitoring configmaps count at <storage-prefix>//configmaps
I0919 10:29:10.907214  108479 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:10.907317  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:29:10.907332  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:29:10.907390  108479 reflector.go:153] Listing and watching *core.ConfigMap from storage/cacher.go:/configmaps
I0919 10:29:10.908441  108479 store.go:1342] Monitoring namespaces count at <storage-prefix>//namespaces
I0919 10:29:10.908584  108479 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:10.908696  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:29:10.908711  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:29:10.908804  108479 reflector.go:153] Listing and watching *core.Namespace from storage/cacher.go:/namespaces
I0919 10:29:10.908806  108479 watch_cache.go:405] Replace watchCache (rev: 30567) 
I0919 10:29:10.909886  108479 store.go:1342] Monitoring endpoints count at <storage-prefix>//services/endpoints
I0919 10:29:10.910025  108479 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:10.910189  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:29:10.910274  108479 reflector.go:153] Listing and watching *core.Endpoints from storage/cacher.go:/services/endpoints
I0919 10:29:10.910298  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:29:10.910450  108479 watch_cache.go:405] Replace watchCache (rev: 30567) 
I0919 10:29:10.911686  108479 store.go:1342] Monitoring nodes count at <storage-prefix>//minions
I0919 10:29:10.911805  108479 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:10.911824  108479 reflector.go:153] Listing and watching *core.Node from storage/cacher.go:/minions
I0919 10:29:10.911892  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:29:10.911903  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:29:10.913384  108479 watch_cache.go:405] Replace watchCache (rev: 30567) 
I0919 10:29:10.913483  108479 watch_cache.go:405] Replace watchCache (rev: 30567) 
I0919 10:29:10.914364  108479 store.go:1342] Monitoring pods count at <storage-prefix>//pods
I0919 10:29:10.914544  108479 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:10.914745  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:29:10.914767  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:29:10.914856  108479 reflector.go:153] Listing and watching *core.Pod from storage/cacher.go:/pods
I0919 10:29:10.916095  108479 watch_cache.go:405] Replace watchCache (rev: 30567) 
I0919 10:29:10.916795  108479 store.go:1342] Monitoring serviceaccounts count at <storage-prefix>//serviceaccounts
I0919 10:29:10.917053  108479 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:10.917363  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:29:10.917498  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:29:10.916852  108479 reflector.go:153] Listing and watching *core.ServiceAccount from storage/cacher.go:/serviceaccounts
I0919 10:29:10.918462  108479 watch_cache.go:405] Replace watchCache (rev: 30567) 
I0919 10:29:10.919046  108479 store.go:1342] Monitoring services count at <storage-prefix>//services/specs
I0919 10:29:10.919082  108479 reflector.go:153] Listing and watching *core.Service from storage/cacher.go:/services/specs
I0919 10:29:10.919082  108479 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:10.919277  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:29:10.919295  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:29:10.920578  108479 watch_cache.go:405] Replace watchCache (rev: 30567) 
I0919 10:29:10.921371  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:29:10.921396  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:29:10.922513  108479 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:10.922931  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:29:10.922957  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:29:10.923498  108479 store.go:1342] Monitoring replicationcontrollers count at <storage-prefix>//controllers
I0919 10:29:10.923520  108479 rest.go:115] the default service ipfamily for this cluster is: IPv4
I0919 10:29:10.923634  108479 reflector.go:153] Listing and watching *core.ReplicationController from storage/cacher.go:/controllers
I0919 10:29:10.923972  108479 storage_factory.go:285] storing bindings in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:10.924167  108479 storage_factory.go:285] storing componentstatuses in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:10.925127  108479 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:10.925709  108479 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:10.926238  108479 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:10.926964  108479 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:10.927410  108479 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:10.927538  108479 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:10.927615  108479 watch_cache.go:405] Replace watchCache (rev: 30567) 
I0919 10:29:10.927732  108479 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:10.928500  108479 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:10.929574  108479 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:10.931332  108479 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:10.931923  108479 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:10.932399  108479 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:10.933076  108479 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:10.933379  108479 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:10.934074  108479 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:10.934293  108479 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:10.934459  108479 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:10.934661  108479 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:10.934846  108479 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:10.934984  108479 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:10.935138  108479 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:10.935873  108479 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:10.936334  108479 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:10.937203  108479 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:10.938159  108479 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:10.938528  108479 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:10.938803  108479 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:10.939547  108479 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:10.939807  108479 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:10.940713  108479 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:10.941313  108479 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:10.941967  108479 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:10.942832  108479 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:10.943118  108479 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:10.943256  108479 master.go:450] Skipping disabled API group "auditregistration.k8s.io".
I0919 10:29:10.943277  108479 master.go:461] Enabling API group "authentication.k8s.io".
I0919 10:29:10.943299  108479 master.go:461] Enabling API group "authorization.k8s.io".
I0919 10:29:10.943456  108479 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:10.943612  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:29:10.943641  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:29:10.944665  108479 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0919 10:29:10.944873  108479 reflector.go:153] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0919 10:29:10.945414  108479 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:10.945664  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:29:10.945738  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:29:10.946542  108479 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0919 10:29:10.946656  108479 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:10.946794  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:29:10.946818  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:29:10.946898  108479 reflector.go:153] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0919 10:29:10.947459  108479 watch_cache.go:405] Replace watchCache (rev: 30567) 
I0919 10:29:10.948973  108479 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0919 10:29:10.948994  108479 master.go:461] Enabling API group "autoscaling".
I0919 10:29:10.949123  108479 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:10.949622  108479 watch_cache.go:405] Replace watchCache (rev: 30567) 
I0919 10:29:10.950006  108479 reflector.go:153] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0919 10:29:10.950780  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:29:10.950808  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:29:10.952529  108479 store.go:1342] Monitoring jobs.batch count at <storage-prefix>//jobs
I0919 10:29:10.952717  108479 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:10.952857  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:29:10.952877  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:29:10.952957  108479 reflector.go:153] Listing and watching *batch.Job from storage/cacher.go:/jobs
I0919 10:29:10.953395  108479 watch_cache.go:405] Replace watchCache (rev: 30567) 
I0919 10:29:10.954769  108479 watch_cache.go:405] Replace watchCache (rev: 30567) 
I0919 10:29:10.959346  108479 store.go:1342] Monitoring cronjobs.batch count at <storage-prefix>//cronjobs
I0919 10:29:10.959431  108479 master.go:461] Enabling API group "batch".
I0919 10:29:10.959749  108479 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:10.960018  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:29:10.960071  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:29:10.960153  108479 reflector.go:153] Listing and watching *batch.CronJob from storage/cacher.go:/cronjobs
I0919 10:29:10.961464  108479 store.go:1342] Monitoring certificatesigningrequests.certificates.k8s.io count at <storage-prefix>//certificatesigningrequests
I0919 10:29:10.961499  108479 master.go:461] Enabling API group "certificates.k8s.io".
I0919 10:29:10.961664  108479 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:10.961837  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:29:10.961860  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:29:10.961936  108479 reflector.go:153] Listing and watching *certificates.CertificateSigningRequest from storage/cacher.go:/certificatesigningrequests
I0919 10:29:10.963063  108479 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0919 10:29:10.963080  108479 watch_cache.go:405] Replace watchCache (rev: 30567) 
I0919 10:29:10.963143  108479 reflector.go:153] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0919 10:29:10.963391  108479 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:10.963550  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:29:10.963571  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:29:10.965295  108479 watch_cache.go:405] Replace watchCache (rev: 30567) 
I0919 10:29:10.965308  108479 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0919 10:29:10.965330  108479 master.go:461] Enabling API group "coordination.k8s.io".
I0919 10:29:10.965344  108479 master.go:450] Skipping disabled API group "discovery.k8s.io".
I0919 10:29:10.965553  108479 reflector.go:153] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0919 10:29:10.965546  108479 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:10.965659  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:29:10.965675  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:29:10.967217  108479 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0919 10:29:10.967266  108479 master.go:461] Enabling API group "extensions".
I0919 10:29:10.967399  108479 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:10.967548  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:29:10.967564  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:29:10.967642  108479 reflector.go:153] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0919 10:29:10.967804  108479 watch_cache.go:405] Replace watchCache (rev: 30567) 
I0919 10:29:10.968015  108479 watch_cache.go:405] Replace watchCache (rev: 30567) 
I0919 10:29:10.969809  108479 store.go:1342] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I0919 10:29:10.969966  108479 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:10.970077  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:29:10.970096  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:29:10.970485  108479 reflector.go:153] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I0919 10:29:10.970506  108479 watch_cache.go:405] Replace watchCache (rev: 30567) 
I0919 10:29:10.972298  108479 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0919 10:29:10.972321  108479 master.go:461] Enabling API group "networking.k8s.io".
I0919 10:29:10.972366  108479 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:10.972483  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:29:10.972499  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:29:10.972568  108479 reflector.go:153] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0919 10:29:10.973124  108479 watch_cache.go:405] Replace watchCache (rev: 30567) 
I0919 10:29:10.974506  108479 watch_cache.go:405] Replace watchCache (rev: 30567) 
I0919 10:29:10.974737  108479 store.go:1342] Monitoring runtimeclasses.node.k8s.io count at <storage-prefix>//runtimeclasses
I0919 10:29:10.974754  108479 master.go:461] Enabling API group "node.k8s.io".
I0919 10:29:10.974905  108479 reflector.go:153] Listing and watching *node.RuntimeClass from storage/cacher.go:/runtimeclasses
I0919 10:29:10.974909  108479 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:10.975047  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:29:10.975064  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:29:10.976341  108479 watch_cache.go:405] Replace watchCache (rev: 30567) 
I0919 10:29:10.976638  108479 store.go:1342] Monitoring poddisruptionbudgets.policy count at <storage-prefix>//poddisruptionbudgets
I0919 10:29:10.976806  108479 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:10.976976  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:29:10.976998  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:29:10.977075  108479 reflector.go:153] Listing and watching *policy.PodDisruptionBudget from storage/cacher.go:/poddisruptionbudgets
I0919 10:29:10.977972  108479 store.go:1342] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicy
I0919 10:29:10.977995  108479 master.go:461] Enabling API group "policy".
I0919 10:29:10.978027  108479 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:10.978131  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:29:10.978150  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:29:10.978322  108479 reflector.go:153] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicy
I0919 10:29:10.979356  108479 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0919 10:29:10.979547  108479 watch_cache.go:405] Replace watchCache (rev: 30567) 
I0919 10:29:10.979705  108479 reflector.go:153] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0919 10:29:10.979500  108479 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:10.980506  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:29:10.983367  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:29:10.981540  108479 watch_cache.go:405] Replace watchCache (rev: 30567) 
I0919 10:29:10.984966  108479 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0919 10:29:10.985014  108479 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:10.985138  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:29:10.985160  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:29:10.985197  108479 reflector.go:153] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0919 10:29:10.986144  108479 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0919 10:29:10.986315  108479 reflector.go:153] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0919 10:29:10.986920  108479 watch_cache.go:405] Replace watchCache (rev: 30567) 
I0919 10:29:10.987447  108479 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:10.987573  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:29:10.987799  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:29:10.989751  108479 watch_cache.go:405] Replace watchCache (rev: 30567) 
I0919 10:29:10.989821  108479 watch_cache.go:405] Replace watchCache (rev: 30567) 
I0919 10:29:10.989838  108479 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0919 10:29:10.989883  108479 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:10.989951  108479 reflector.go:153] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0919 10:29:10.990013  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:29:10.990032  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:29:11.000063  108479 watch_cache.go:405] Replace watchCache (rev: 30567) 
I0919 10:29:11.001706  108479 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0919 10:29:11.002215  108479 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:11.002447  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:29:11.002481  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:29:11.002567  108479 reflector.go:153] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0919 10:29:11.004360  108479 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0919 10:29:11.004417  108479 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:11.004620  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:29:11.004647  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:29:11.004760  108479 reflector.go:153] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0919 10:29:11.005542  108479 watch_cache.go:405] Replace watchCache (rev: 30567) 
I0919 10:29:11.005997  108479 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0919 10:29:11.006390  108479 reflector.go:153] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0919 10:29:11.007004  108479 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:11.007341  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:29:11.007512  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:29:11.012011  108479 watch_cache.go:405] Replace watchCache (rev: 30567) 
I0919 10:29:11.012048  108479 reflector.go:153] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0919 10:29:11.012021  108479 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0919 10:29:11.012096  108479 master.go:461] Enabling API group "rbac.authorization.k8s.io".
I0919 10:29:11.012715  108479 watch_cache.go:405] Replace watchCache (rev: 30567) 
I0919 10:29:11.014047  108479 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:11.014112  108479 watch_cache.go:405] Replace watchCache (rev: 30567) 
I0919 10:29:11.014234  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:29:11.014257  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:29:11.016144  108479 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0919 10:29:11.016228  108479 reflector.go:153] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0919 10:29:11.016407  108479 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:11.016558  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:29:11.016587  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:29:11.016835  108479 watch_cache.go:405] Replace watchCache (rev: 30567) 
I0919 10:29:11.017678  108479 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0919 10:29:11.017707  108479 master.go:461] Enabling API group "scheduling.k8s.io".
I0919 10:29:11.017851  108479 master.go:450] Skipping disabled API group "settings.k8s.io".
I0919 10:29:11.017856  108479 reflector.go:153] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0919 10:29:11.018039  108479 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:11.018193  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:29:11.018258  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:29:11.018936  108479 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0919 10:29:11.019124  108479 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:11.019267  108479 reflector.go:153] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0919 10:29:11.019440  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:29:11.019499  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:29:11.019184  108479 watch_cache.go:405] Replace watchCache (rev: 30567) 
I0919 10:29:11.020410  108479 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0919 10:29:11.020472  108479 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:11.020639  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:29:11.020666  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:29:11.020746  108479 reflector.go:153] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0919 10:29:11.021710  108479 watch_cache.go:405] Replace watchCache (rev: 30567) 
I0919 10:29:11.022118  108479 store.go:1342] Monitoring csinodes.storage.k8s.io count at <storage-prefix>//csinodes
I0919 10:29:11.022183  108479 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:11.023803  108479 reflector.go:153] Listing and watching *storage.CSINode from storage/cacher.go:/csinodes
I0919 10:29:11.024317  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:29:11.024355  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:29:11.025199  108479 store.go:1342] Monitoring csidrivers.storage.k8s.io count at <storage-prefix>//csidrivers
I0919 10:29:11.025590  108479 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:11.025854  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:29:11.025899  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:29:11.026060  108479 reflector.go:153] Listing and watching *storage.CSIDriver from storage/cacher.go:/csidrivers
I0919 10:29:11.028318  108479 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0919 10:29:11.028378  108479 watch_cache.go:405] Replace watchCache (rev: 30567) 
I0919 10:29:11.028503  108479 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:11.028619  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:29:11.028639  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:29:11.028736  108479 reflector.go:153] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0919 10:29:11.028928  108479 watch_cache.go:405] Replace watchCache (rev: 30567) 
I0919 10:29:11.030413  108479 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0919 10:29:11.030437  108479 master.go:461] Enabling API group "storage.k8s.io".
I0919 10:29:11.030581  108479 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:11.030797  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:29:11.030817  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:29:11.030888  108479 reflector.go:153] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0919 10:29:11.031485  108479 watch_cache.go:405] Replace watchCache (rev: 30567) 
I0919 10:29:11.032506  108479 store.go:1342] Monitoring deployments.apps count at <storage-prefix>//deployments
I0919 10:29:11.032653  108479 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:11.032765  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:29:11.032783  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:29:11.032851  108479 reflector.go:153] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0919 10:29:11.033460  108479 store.go:1342] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0919 10:29:11.033586  108479 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:11.033701  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:29:11.033717  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:29:11.033780  108479 reflector.go:153] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0919 10:29:11.035016  108479 watch_cache.go:405] Replace watchCache (rev: 30567) 
I0919 10:29:11.035243  108479 store.go:1342] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0919 10:29:11.035368  108479 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:11.035418  108479 watch_cache.go:405] Replace watchCache (rev: 30567) 
I0919 10:29:11.035511  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:29:11.035538  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:29:11.035647  108479 reflector.go:153] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0919 10:29:11.037264  108479 store.go:1342] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0919 10:29:11.037446  108479 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:11.037502  108479 watch_cache.go:405] Replace watchCache (rev: 30567) 
I0919 10:29:11.037569  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:29:11.037585  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:29:11.037659  108479 reflector.go:153] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0919 10:29:11.038346  108479 store.go:1342] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0919 10:29:11.038369  108479 master.go:461] Enabling API group "apps".
I0919 10:29:11.038404  108479 reflector.go:153] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0919 10:29:11.038403  108479 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:11.038515  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:29:11.038531  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:29:11.039079  108479 watch_cache.go:405] Replace watchCache (rev: 30567) 
I0919 10:29:11.039272  108479 watch_cache.go:405] Replace watchCache (rev: 30567) 
I0919 10:29:11.039755  108479 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0919 10:29:11.039784  108479 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:11.039882  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:29:11.039907  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:29:11.039966  108479 reflector.go:153] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0919 10:29:11.041083  108479 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0919 10:29:11.041115  108479 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:11.041282  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:29:11.041301  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:29:11.041382  108479 reflector.go:153] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0919 10:29:11.041434  108479 watch_cache.go:405] Replace watchCache (rev: 30567) 
I0919 10:29:11.042555  108479 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0919 10:29:11.042594  108479 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:11.042730  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:29:11.042748  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:29:11.042826  108479 reflector.go:153] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0919 10:29:11.043134  108479 watch_cache.go:405] Replace watchCache (rev: 30567) 
I0919 10:29:11.044978  108479 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0919 10:29:11.044987  108479 watch_cache.go:405] Replace watchCache (rev: 30567) 
I0919 10:29:11.044996  108479 master.go:461] Enabling API group "admissionregistration.k8s.io".
I0919 10:29:11.045036  108479 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:11.045228  108479 reflector.go:153] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0919 10:29:11.045311  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:29:11.045329  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:29:11.046660  108479 store.go:1342] Monitoring events count at <storage-prefix>//events
I0919 10:29:11.046698  108479 master.go:461] Enabling API group "events.k8s.io".
I0919 10:29:11.046914  108479 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:11.047092  108479 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:11.047416  108479 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:11.047518  108479 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:11.047602  108479 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:11.047684  108479 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:11.047763  108479 reflector.go:153] Listing and watching *core.Event from storage/cacher.go:/events
I0919 10:29:11.047840  108479 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:11.047947  108479 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:11.048025  108479 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:11.048108  108479 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:11.048201  108479 watch_cache.go:405] Replace watchCache (rev: 30567) 
I0919 10:29:11.048997  108479 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:11.049239  108479 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:11.049985  108479 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:11.050278  108479 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:11.050793  108479 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:11.051065  108479 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:11.051862  108479 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:11.052145  108479 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:11.054649  108479 watch_cache.go:405] Replace watchCache (rev: 30567) 
I0919 10:29:11.064460  108479 watch_cache.go:405] Replace watchCache (rev: 30567) 
I0919 10:29:11.065748  108479 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:11.066460  108479 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0919 10:29:11.066675  108479 genericapiserver.go:404] Skipping API batch/v2alpha1 because it has no resources.
I0919 10:29:11.067634  108479 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:11.067946  108479 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:11.068502  108479 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:11.068230  108479 watch_cache.go:405] Replace watchCache (rev: 30567) 
I0919 10:29:11.071934  108479 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:11.076625  108479 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:11.078104  108479 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:11.079377  108479 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:11.080409  108479 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:11.081293  108479 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:11.081594  108479 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:11.082296  108479 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0919 10:29:11.082390  108479 genericapiserver.go:404] Skipping API node.k8s.io/v1alpha1 because it has no resources.
I0919 10:29:11.083406  108479 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:11.083713  108479 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:11.084415  108479 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:11.085110  108479 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:11.085597  108479 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:11.086409  108479 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:11.087214  108479 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:11.087925  108479 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:11.088467  108479 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:11.089294  108479 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:11.089993  108479 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0919 10:29:11.090076  108479 genericapiserver.go:404] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
I0919 10:29:11.090662  108479 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:11.091479  108479 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0919 10:29:11.091624  108479 genericapiserver.go:404] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
I0919 10:29:11.092323  108479 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:11.093000  108479 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:11.093299  108479 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:11.093896  108479 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:11.094413  108479 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:11.094929  108479 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:11.095488  108479 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0919 10:29:11.095585  108479 genericapiserver.go:404] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
I0919 10:29:11.096409  108479 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:11.097224  108479 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:11.097522  108479 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:11.098248  108479 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:11.098528  108479 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:11.098817  108479 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:11.099556  108479 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:11.099840  108479 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:11.100105  108479 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:11.100875  108479 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:11.101152  108479 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:11.101437  108479 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0919 10:29:11.101509  108479 genericapiserver.go:404] Skipping API apps/v1beta2 because it has no resources.
W0919 10:29:11.101518  108479 genericapiserver.go:404] Skipping API apps/v1beta1 because it has no resources.
I0919 10:29:11.102249  108479 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:11.102899  108479 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:11.103591  108479 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:11.104304  108479 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:11.105143  108479 storage_factory.go:285] storing events.events.k8s.io in events.k8s.io/v1beta1, reading as events.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2ed81dea-ff52-4e22-8be6-a7e5e7c84907", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:29:11.110911  108479 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.521126ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52930]
I0919 10:29:11.114129  108479 httplog.go:90] GET /api/v1/services: (1.234054ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52930]
I0919 10:29:11.119286  108479 httplog.go:90] GET /api/v1/services: (2.32705ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52930]
I0919 10:29:11.123704  108479 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 10:29:11.123730  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:29:11.123741  108479 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 10:29:11.123751  108479 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 10:29:11.123759  108479 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 10:29:11.123781  108479 httplog.go:90] GET /healthz: (164.936µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52932]
I0919 10:29:11.127267  108479 httplog.go:90] GET /api/v1/namespaces/kube-system: (3.415467ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52930]
I0919 10:29:11.130389  108479 httplog.go:90] GET /api/v1/services: (1.738535ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52930]
I0919 10:29:11.130533  108479 httplog.go:90] GET /api/v1/services: (2.237099ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52932]
I0919 10:29:11.135085  108479 httplog.go:90] POST /api/v1/namespaces: (7.229624ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:29:11.136860  108479 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.409753ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52932]
I0919 10:29:11.141456  108479 httplog.go:90] POST /api/v1/namespaces: (3.838929ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52932]
I0919 10:29:11.143159  108479 httplog.go:90] GET /api/v1/namespaces/kube-node-lease: (1.333673ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52932]
I0919 10:29:11.145377  108479 httplog.go:90] POST /api/v1/namespaces: (1.900104ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52932]
I0919 10:29:11.146356  108479 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 10:29:11.146382  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:29:11.146394  108479 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 10:29:11.146404  108479 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 10:29:11.146565  108479 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 10:29:11.146802  108479 httplog.go:90] GET /healthz: (644.016µs) 0 [Go-http-client/1.1 127.0.0.1:52928]
I0919 10:29:11.224872  108479 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 10:29:11.224906  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:29:11.224920  108479 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 10:29:11.224932  108479 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 10:29:11.224956  108479 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 10:29:11.224990  108479 httplog.go:90] GET /healthz: (265.742µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52932]
I0919 10:29:11.247872  108479 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 10:29:11.247904  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:29:11.247916  108479 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 10:29:11.247925  108479 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 10:29:11.247932  108479 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 10:29:11.247959  108479 httplog.go:90] GET /healthz: (214.868µs) 0 [Go-http-client/1.1 127.0.0.1:52932]
I0919 10:29:11.324914  108479 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 10:29:11.324951  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:29:11.324965  108479 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 10:29:11.324975  108479 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 10:29:11.324983  108479 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 10:29:11.325012  108479 httplog.go:90] GET /healthz: (269.504µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52932]
I0919 10:29:11.348620  108479 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 10:29:11.348660  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:29:11.348672  108479 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 10:29:11.348689  108479 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 10:29:11.348697  108479 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 10:29:11.348725  108479 httplog.go:90] GET /healthz: (261.998µs) 0 [Go-http-client/1.1 127.0.0.1:52932]
I0919 10:29:11.427233  108479 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 10:29:11.427271  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:29:11.427284  108479 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 10:29:11.427295  108479 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 10:29:11.427303  108479 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 10:29:11.427337  108479 httplog.go:90] GET /healthz: (285.552µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52932]
I0919 10:29:11.447872  108479 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 10:29:11.447907  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:29:11.447920  108479 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 10:29:11.447929  108479 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 10:29:11.447938  108479 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 10:29:11.447965  108479 httplog.go:90] GET /healthz: (235.773µs) 0 [Go-http-client/1.1 127.0.0.1:52932]
I0919 10:29:11.524911  108479 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 10:29:11.524944  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:29:11.524956  108479 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 10:29:11.524969  108479 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 10:29:11.524976  108479 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 10:29:11.525016  108479 httplog.go:90] GET /healthz: (249.679µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52932]
I0919 10:29:11.547902  108479 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 10:29:11.547940  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:29:11.547951  108479 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 10:29:11.547959  108479 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 10:29:11.547966  108479 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 10:29:11.548007  108479 httplog.go:90] GET /healthz: (247.267µs) 0 [Go-http-client/1.1 127.0.0.1:52932]
I0919 10:29:11.624885  108479 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 10:29:11.624918  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:29:11.624931  108479 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 10:29:11.624941  108479 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 10:29:11.624949  108479 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 10:29:11.624974  108479 httplog.go:90] GET /healthz: (219.019µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52932]
I0919 10:29:11.666525  108479 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 10:29:11.666563  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:29:11.666574  108479 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 10:29:11.666583  108479 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 10:29:11.666591  108479 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 10:29:11.666634  108479 httplog.go:90] GET /healthz: (259.971µs) 0 [Go-http-client/1.1 127.0.0.1:52932]
I0919 10:29:11.725079  108479 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 10:29:11.725114  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:29:11.725127  108479 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 10:29:11.725137  108479 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 10:29:11.725145  108479 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 10:29:11.725192  108479 httplog.go:90] GET /healthz: (259.355µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52932]
I0919 10:29:11.747969  108479 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 10:29:11.748007  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:29:11.748021  108479 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 10:29:11.748031  108479 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 10:29:11.748040  108479 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 10:29:11.748073  108479 httplog.go:90] GET /healthz: (272.763µs) 0 [Go-http-client/1.1 127.0.0.1:52932]
I0919 10:29:11.824947  108479 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 10:29:11.824988  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:29:11.825000  108479 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 10:29:11.825010  108479 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 10:29:11.825019  108479 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 10:29:11.825048  108479 httplog.go:90] GET /healthz: (241.879µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52932]
I0919 10:29:11.847969  108479 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 10:29:11.848010  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:29:11.848033  108479 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 10:29:11.848043  108479 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 10:29:11.848053  108479 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 10:29:11.848088  108479 httplog.go:90] GET /healthz: (282.103µs) 0 [Go-http-client/1.1 127.0.0.1:52932]
I0919 10:29:11.889625  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:29:11.889777  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:29:11.925782  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:29:11.925812  108479 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 10:29:11.925822  108479 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 10:29:11.925839  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 10:29:11.925878  108479 httplog.go:90] GET /healthz: (1.126507ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52932]
I0919 10:29:11.950731  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:29:11.950777  108479 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 10:29:11.950788  108479 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 10:29:11.950797  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 10:29:11.950839  108479 httplog.go:90] GET /healthz: (3.048632ms) 0 [Go-http-client/1.1 127.0.0.1:52932]
I0919 10:29:12.025995  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:29:12.026029  108479 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 10:29:12.026040  108479 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 10:29:12.026049  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 10:29:12.026088  108479 httplog.go:90] GET /healthz: (1.242228ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52932]
I0919 10:29:12.049159  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:29:12.049363  108479 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 10:29:12.049443  108479 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 10:29:12.049590  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 10:29:12.049992  108479 httplog.go:90] GET /healthz: (2.027938ms) 0 [Go-http-client/1.1 127.0.0.1:52932]
I0919 10:29:12.110760  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.582542ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52932]
I0919 10:29:12.111690  108479 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.945361ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:12.112812  108479 httplog.go:90] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-node-critical: (3.050572ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52950]
I0919 10:29:12.113786  108479 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: (1.74068ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:12.115209  108479 httplog.go:90] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (2.038562ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52950]
I0919 10:29:12.115385  108479 storage_scheduling.go:139] created PriorityClass system-node-critical with value 2000001000
I0919 10:29:12.116999  108479 httplog.go:90] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-cluster-critical: (1.495893ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52950]
I0919 10:29:12.118732  108479 httplog.go:90] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (1.358723ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52950]
I0919 10:29:12.118911  108479 storage_scheduling.go:139] created PriorityClass system-cluster-critical with value 2000000000
I0919 10:29:12.118925  108479 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
I0919 10:29:12.119895  108479 httplog.go:90] POST /api/v1/namespaces/kube-system/configmaps: (4.752545ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:12.122062  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.026505ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:12.123477  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (858.46µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:12.125104  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (1.289694ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:12.126492  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:29:12.126514  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 10:29:12.126541  108479 httplog.go:90] GET /healthz: (1.766379ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52950]
I0919 10:29:12.126566  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (1.156818ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:12.127562  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (704.6µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:12.128784  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (978.221µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:12.130023  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (800.269µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:12.131803  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (1.351041ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:12.132765  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-admin: (664.869µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:12.134561  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.2818ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:12.134915  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/cluster-admin
I0919 10:29:12.135956  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:discovery: (667.611µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:12.137796  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.272899ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:12.137954  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:discovery
I0919 10:29:12.138745  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:basic-user: (654.168µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:12.140423  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.314775ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:12.140714  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:basic-user
I0919 10:29:12.141618  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:public-info-viewer: (711.932µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:12.143561  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.36073ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:12.143782  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:public-info-viewer
I0919 10:29:12.144764  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (748.159µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:12.146405  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.253886ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:12.146719  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/admin
I0919 10:29:12.147589  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (675.989µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:12.148347  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:29:12.148371  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 10:29:12.148401  108479 httplog.go:90] GET /healthz: (740.029µs) 0 [Go-http-client/1.1 127.0.0.1:52950]
I0919 10:29:12.149719  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.81821ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:12.150015  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/edit
I0919 10:29:12.151027  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (702.856µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:12.153271  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.758954ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:12.153565  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/view
I0919 10:29:12.154705  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (817.007µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:12.156603  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.42811ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:12.156854  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin
I0919 10:29:12.157755  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (729.16µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:12.159841  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.676327ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:12.160098  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit
I0919 10:29:12.161003  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (747.737µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:12.163405  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.019044ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:12.163808  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view
I0919 10:29:12.164878  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:heapster: (796.027µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:12.166736  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.460009ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:12.166893  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:heapster
I0919 10:29:12.168039  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node: (964.246µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:12.170503  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.999087ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:12.170784  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node
I0919 10:29:12.171850  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-problem-detector: (777.278µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:12.173950  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.53208ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:12.174230  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector
I0919 10:29:12.175526  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kubelet-api-admin: (1.117563ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:12.177410  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.375215ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:12.177755  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kubelet-api-admin
I0919 10:29:12.179009  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-bootstrapper: (1.027408ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:12.180972  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.438636ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:12.181224  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-bootstrapper
I0919 10:29:12.182155  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:auth-delegator: (752.802µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:12.184123  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.52882ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:12.184467  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:auth-delegator
I0919 10:29:12.185534  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-aggregator: (786.895µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:12.187286  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.403755ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:12.187598  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator
I0919 10:29:12.188899  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-controller-manager: (1.120142ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:12.191134  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.749735ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:12.191461  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager
I0919 10:29:12.192608  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-dns: (824.213µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:12.194089  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.068826ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:12.194720  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-dns
I0919 10:29:12.195760  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:persistent-volume-provisioner: (807.64µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:12.197838  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.576702ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:12.198192  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner
I0919 10:29:12.199331  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-attacher: (916.729µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:12.201727  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.791201ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:12.201960  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:csi-external-attacher
I0919 10:29:12.203547  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:nodeclient: (1.28838ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:12.205418  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.477941ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:12.205618  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:nodeclient
I0919 10:29:12.206835  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient: (1.031903ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:12.210961  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.103354ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:12.211442  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
I0919 10:29:12.212826  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:volume-scheduler: (901.008µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:12.215140  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.725721ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:12.215412  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:volume-scheduler
I0919 10:29:12.216683  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-proxier: (1.038859ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:12.218940  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.636889ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:12.219254  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-proxier
I0919 10:29:12.220487  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-scheduler: (1.038608ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:12.223479  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.086138ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:12.224572  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler
I0919 10:29:12.225390  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:29:12.225417  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 10:29:12.225482  108479 httplog.go:90] GET /healthz: (783.883µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:12.227162  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-provisioner: (2.235493ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52950]
I0919 10:29:12.229806  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.825913ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52950]
I0919 10:29:12.230481  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:csi-external-provisioner
I0919 10:29:12.231593  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:attachdetach-controller: (889.292µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52950]
I0919 10:29:12.234356  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.967555ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52950]
I0919 10:29:12.234721  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0919 10:29:12.235960  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:clusterrole-aggregation-controller: (870.009µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52950]
I0919 10:29:12.237883  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.419217ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52950]
I0919 10:29:12.238356  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0919 10:29:12.239738  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:cronjob-controller: (1.075011ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52950]
I0919 10:29:12.241907  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.591368ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52950]
I0919 10:29:12.242236  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0919 10:29:12.243594  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:daemon-set-controller: (1.088539ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52950]
I0919 10:29:12.245904  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.650762ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52950]
I0919 10:29:12.246253  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0919 10:29:12.247605  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:deployment-controller: (1.108783ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52950]
I0919 10:29:12.249987  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:29:12.250034  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 10:29:12.250069  108479 httplog.go:90] GET /healthz: (2.254251ms) 0 [Go-http-client/1.1 127.0.0.1:52936]
I0919 10:29:12.250565  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.449864ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52950]
I0919 10:29:12.250932  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller
I0919 10:29:12.252000  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:disruption-controller: (821.254µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52950]
I0919 10:29:12.254588  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.086154ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52950]
I0919 10:29:12.254871  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller
I0919 10:29:12.256110  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:endpoint-controller: (969.734µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52950]
I0919 10:29:12.258084  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.512742ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52950]
I0919 10:29:12.258581  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0919 10:29:12.259792  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:expand-controller: (892.669µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52950]
I0919 10:29:12.261845  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.587203ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52950]
I0919 10:29:12.262165  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:expand-controller
I0919 10:29:12.263434  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:generic-garbage-collector: (895.114µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52950]
I0919 10:29:12.265336  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.383525ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52950]
I0919 10:29:12.265634  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0919 10:29:12.266876  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:horizontal-pod-autoscaler: (998.915µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52950]
I0919 10:29:12.268883  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.49862ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52950]
I0919 10:29:12.269261  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0919 10:29:12.270295  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:job-controller: (823.637µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52950]
I0919 10:29:12.272492  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.716756ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52950]
I0919 10:29:12.272754  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller
I0919 10:29:12.273804  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:namespace-controller: (856.987µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52950]
I0919 10:29:12.275685  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.421058ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52950]
I0919 10:29:12.275980  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:namespace-controller
I0919 10:29:12.277058  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:node-controller: (884.753µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52950]
I0919 10:29:12.278820  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.300935ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52950]
I0919 10:29:12.279097  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:node-controller
I0919 10:29:12.280273  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:persistent-volume-binder: (930.787µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52950]
I0919 10:29:12.282316  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.604143ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52950]
I0919 10:29:12.282627  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0919 10:29:12.283877  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pod-garbage-collector: (1.003435ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52950]
I0919 10:29:12.285933  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.652339ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52950]
I0919 10:29:12.286285  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0919 10:29:12.287708  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replicaset-controller: (985.849µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52950]
I0919 10:29:12.289611  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.437093ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52950]
I0919 10:29:12.290037  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0919 10:29:12.291387  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replication-controller: (969.166µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52950]
I0919 10:29:12.293718  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.848747ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52950]
I0919 10:29:12.293996  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller
I0919 10:29:12.294966  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:resourcequota-controller: (741.435µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52950]
I0919 10:29:12.296846  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.409213ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52950]
I0919 10:29:12.297107  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0919 10:29:12.298518  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:route-controller: (1.228694ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52950]
I0919 10:29:12.300570  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.73476ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52950]
I0919 10:29:12.300836  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:route-controller
I0919 10:29:12.302036  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-account-controller: (948.618µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52950]
I0919 10:29:12.304141  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.60364ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52950]
I0919 10:29:12.304394  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-account-controller
I0919 10:29:12.305406  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-controller: (801.668µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52950]
I0919 10:29:12.307320  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.465183ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52950]
I0919 10:29:12.307562  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-controller
I0919 10:29:12.308651  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:statefulset-controller: (867.517µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52950]
I0919 10:29:12.310930  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.836049ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52950]
I0919 10:29:12.311357  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0919 10:29:12.312792  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:ttl-controller: (1.195526ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52950]
I0919 10:29:12.314803  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.53577ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52950]
I0919 10:29:12.315369  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-controller
I0919 10:29:12.316430  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:certificate-controller: (834.818µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52950]
I0919 10:29:12.325412  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:29:12.325503  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 10:29:12.325657  108479 httplog.go:90] GET /healthz: (998.157µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52950]
I0919 10:29:12.330933  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.672832ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52950]
I0919 10:29:12.331272  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller
I0919 10:29:12.348955  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:29:12.348985  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 10:29:12.349025  108479 httplog.go:90] GET /healthz: (1.243996ms) 0 [Go-http-client/1.1 127.0.0.1:52950]
I0919 10:29:12.350536  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pvc-protection-controller: (1.368554ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:12.371803  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.42313ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:12.372270  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0919 10:29:12.390709  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pv-protection-controller: (1.34092ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:12.411535  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.177134ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:12.412245  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0919 10:29:12.425914  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:29:12.425956  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 10:29:12.425993  108479 httplog.go:90] GET /healthz: (1.240696ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:12.430587  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin: (1.291566ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:12.449004  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:29:12.449361  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 10:29:12.449763  108479 httplog.go:90] GET /healthz: (1.854209ms) 0 [Go-http-client/1.1 127.0.0.1:52936]
I0919 10:29:12.451517  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.349913ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52950]
I0919 10:29:12.451779  108479 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin
I0919 10:29:12.470765  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (1.433908ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52950]
I0919 10:29:12.491955  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.577184ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52950]
I0919 10:29:12.492348  108479 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery
I0919 10:29:12.510821  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:basic-user: (1.496597ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52950]
I0919 10:29:12.526070  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:29:12.526106  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 10:29:12.526144  108479 httplog.go:90] GET /healthz: (1.393534ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52950]
I0919 10:29:12.531464  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.128683ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52950]
I0919 10:29:12.531704  108479 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user
I0919 10:29:12.548972  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:29:12.549034  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 10:29:12.549077  108479 httplog.go:90] GET /healthz: (1.318581ms) 0 [Go-http-client/1.1 127.0.0.1:52950]
I0919 10:29:12.550413  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:public-info-viewer: (1.019674ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:12.571245  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.891101ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:12.571569  108479 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:public-info-viewer
I0919 10:29:12.591135  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node-proxier: (1.380517ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:12.611738  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.268419ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:12.612087  108479 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier
I0919 10:29:12.626040  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:29:12.626304  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 10:29:12.626452  108479 httplog.go:90] GET /healthz: (1.638766ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:12.630858  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-controller-manager: (1.401995ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:12.649146  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:29:12.649327  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 10:29:12.649503  108479 httplog.go:90] GET /healthz: (1.694421ms) 0 [Go-http-client/1.1 127.0.0.1:52936]
I0919 10:29:12.651590  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.343349ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52950]
I0919 10:29:12.651993  108479 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager
I0919 10:29:12.674969  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-dns: (1.430474ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52950]
I0919 10:29:12.691486  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.147878ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52950]
I0919 10:29:12.691998  108479 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns
I0919 10:29:12.711023  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-scheduler: (1.549513ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52950]
I0919 10:29:12.725708  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:29:12.725748  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 10:29:12.725787  108479 httplog.go:90] GET /healthz: (1.055361ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52950]
I0919 10:29:12.731276  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.990247ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52950]
I0919 10:29:12.731522  108479 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler
I0919 10:29:12.748834  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:29:12.748866  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 10:29:12.748924  108479 httplog.go:90] GET /healthz: (1.15347ms) 0 [Go-http-client/1.1 127.0.0.1:52950]
I0919 10:29:12.750613  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:volume-scheduler: (1.13832ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:12.771625  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.252313ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:12.771883  108479 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:volume-scheduler
I0919 10:29:12.790700  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node: (1.383627ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:12.811987  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.24417ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:12.812346  108479 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node
I0919 10:29:12.826122  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:29:12.826156  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 10:29:12.826216  108479 httplog.go:90] GET /healthz: (1.266551ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:12.830591  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:attachdetach-controller: (1.132596ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:12.849107  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:29:12.849139  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 10:29:12.849446  108479 httplog.go:90] GET /healthz: (1.370301ms) 0 [Go-http-client/1.1 127.0.0.1:52936]
I0919 10:29:12.851504  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.070487ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52950]
I0919 10:29:12.851773  108479 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0919 10:29:12.870798  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:clusterrole-aggregation-controller: (1.425986ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52950]
I0919 10:29:12.891668  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.354084ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52950]
I0919 10:29:12.891914  108479 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0919 10:29:12.911956  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:cronjob-controller: (2.232996ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52950]
I0919 10:29:12.926004  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:29:12.926041  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 10:29:12.926083  108479 httplog.go:90] GET /healthz: (1.302992ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52950]
I0919 10:29:12.931688  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.350717ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52950]
I0919 10:29:12.932289  108479 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0919 10:29:12.949054  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:29:12.949106  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 10:29:12.949149  108479 httplog.go:90] GET /healthz: (1.338551ms) 0 [Go-http-client/1.1 127.0.0.1:52950]
I0919 10:29:12.953105  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:daemon-set-controller: (3.737393ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:12.971625  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.274087ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:12.971889  108479 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0919 10:29:12.990587  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:deployment-controller: (1.312153ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:13.011873  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.494266ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:13.012197  108479 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I0919 10:29:13.029434  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:29:13.029466  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 10:29:13.029503  108479 httplog.go:90] GET /healthz: (1.332534ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:13.040384  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:disruption-controller: (1.632646ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:13.048848  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:29:13.048881  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 10:29:13.048919  108479 httplog.go:90] GET /healthz: (1.167014ms) 0 [Go-http-client/1.1 127.0.0.1:52936]
I0919 10:29:13.051684  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.144242ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:13.052450  108479 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I0919 10:29:13.070645  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:endpoint-controller: (1.294749ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:13.091920  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.497729ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:13.092230  108479 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0919 10:29:13.110929  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:expand-controller: (1.532304ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:13.126061  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:29:13.126094  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 10:29:13.126216  108479 httplog.go:90] GET /healthz: (1.357723ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:13.131341  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.09201ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:13.131588  108479 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
I0919 10:29:13.148891  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:29:13.148929  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 10:29:13.148969  108479 httplog.go:90] GET /healthz: (1.184212ms) 0 [Go-http-client/1.1 127.0.0.1:52936]
I0919 10:29:13.150545  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:generic-garbage-collector: (1.096623ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:13.171667  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.28239ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:13.171923  108479 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0919 10:29:13.190664  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:horizontal-pod-autoscaler: (1.353566ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:13.212510  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.481635ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:13.212792  108479 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0919 10:29:13.225924  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:29:13.225954  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 10:29:13.225998  108479 httplog.go:90] GET /healthz: (1.25817ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:13.230602  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:job-controller: (1.279893ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:13.248892  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:29:13.248923  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 10:29:13.248978  108479 httplog.go:90] GET /healthz: (1.126697ms) 0 [Go-http-client/1.1 127.0.0.1:52936]
I0919 10:29:13.251877  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.224544ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:13.252132  108479 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I0919 10:29:13.270846  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:namespace-controller: (1.460628ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:13.291667  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.319474ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:13.292004  108479 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I0919 10:29:13.323936  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:node-controller: (2.603222ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:13.325843  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:29:13.325867  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 10:29:13.325899  108479 httplog.go:90] GET /healthz: (1.021251ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:13.331282  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.015759ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:13.331934  108479 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I0919 10:29:13.349563  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:29:13.349623  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 10:29:13.349695  108479 httplog.go:90] GET /healthz: (1.727063ms) 0 [Go-http-client/1.1 127.0.0.1:52936]
I0919 10:29:13.351621  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:persistent-volume-binder: (1.704547ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52950]
I0919 10:29:13.373367  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.757408ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52950]
I0919 10:29:13.373665  108479 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0919 10:29:13.391631  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pod-garbage-collector: (1.687299ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52950]
I0919 10:29:13.412578  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.124491ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52950]
I0919 10:29:13.413212  108479 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0919 10:29:13.426830  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:29:13.426887  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 10:29:13.426949  108479 httplog.go:90] GET /healthz: (2.05368ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52950]
I0919 10:29:13.431853  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replicaset-controller: (2.488593ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52950]
I0919 10:29:13.450638  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:29:13.450714  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 10:29:13.450786  108479 httplog.go:90] GET /healthz: (2.695197ms) 0 [Go-http-client/1.1 127.0.0.1:52950]
I0919 10:29:13.454234  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (4.574191ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:13.455726  108479 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0919 10:29:13.471764  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replication-controller: (2.201393ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:13.493768  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (4.115766ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:13.494565  108479 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I0919 10:29:13.518149  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:resourcequota-controller: (2.278177ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:13.526716  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:29:13.526782  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 10:29:13.526852  108479 httplog.go:90] GET /healthz: (1.928195ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:13.532677  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.246193ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:13.533666  108479 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0919 10:29:13.550821  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:29:13.550892  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 10:29:13.550985  108479 httplog.go:90] GET /healthz: (2.990957ms) 0 [Go-http-client/1.1 127.0.0.1:52936]
I0919 10:29:13.552470  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:route-controller: (2.327562ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52950]
I0919 10:29:13.572505  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.919107ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52950]
I0919 10:29:13.572865  108479 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I0919 10:29:13.591673  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-account-controller: (2.180287ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52950]
I0919 10:29:13.613437  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.885591ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52950]
I0919 10:29:13.613843  108479 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I0919 10:29:13.627269  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:29:13.627606  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 10:29:13.628071  108479 httplog.go:90] GET /healthz: (3.123003ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52950]
I0919 10:29:13.632295  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-controller: (1.621496ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52950]
I0919 10:29:13.650366  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:29:13.650408  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 10:29:13.650465  108479 httplog.go:90] GET /healthz: (2.580517ms) 0 [Go-http-client/1.1 127.0.0.1:52950]
I0919 10:29:13.652976  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.536981ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:13.653467  108479 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I0919 10:29:13.672238  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:statefulset-controller: (2.682797ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:13.692980  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.487836ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:13.693365  108479 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0919 10:29:13.711695  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:ttl-controller: (2.049982ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:13.726819  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:29:13.726868  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 10:29:13.726930  108479 httplog.go:90] GET /healthz: (1.946777ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:13.734466  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (5.058581ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:13.734865  108479 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I0919 10:29:13.750245  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:29:13.750296  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 10:29:13.750362  108479 httplog.go:90] GET /healthz: (2.319716ms) 0 [Go-http-client/1.1 127.0.0.1:52936]
I0919 10:29:13.751543  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:certificate-controller: (2.039258ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52950]
I0919 10:29:13.775886  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (6.320004ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52950]
I0919 10:29:13.776261  108479 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I0919 10:29:13.792144  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pvc-protection-controller: (2.514415ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52950]
I0919 10:29:13.813861  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (4.178274ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52950]
I0919 10:29:13.814509  108479 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0919 10:29:13.826994  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:29:13.827054  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 10:29:13.827120  108479 httplog.go:90] GET /healthz: (2.074221ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52950]
I0919 10:29:13.831797  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pv-protection-controller: (2.361978ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52950]
I0919 10:29:13.848950  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:29:13.848984  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 10:29:13.849022  108479 httplog.go:90] GET /healthz: (1.222076ms) 0 [Go-http-client/1.1 127.0.0.1:52950]
I0919 10:29:13.853037  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.857316ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:13.853411  108479 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0919 10:29:13.870633  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/extension-apiserver-authentication-reader: (1.305788ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:13.872650  108479 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.601549ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:13.894136  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (4.505517ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:13.894595  108479 storage_rbac.go:278] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
I0919 10:29:13.912938  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:bootstrap-signer: (2.629027ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:13.916261  108479 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.45853ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:13.928888  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:29:13.928933  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 10:29:13.929005  108479 httplog.go:90] GET /healthz: (2.093148ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:13.932293  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.835187ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52950]
I0919 10:29:13.932546  108479 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0919 10:29:13.949746  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:29:13.949800  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 10:29:13.949866  108479 httplog.go:90] GET /healthz: (1.999007ms) 0 [Go-http-client/1.1 127.0.0.1:52950]
I0919 10:29:13.951499  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:cloud-provider: (1.864306ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:13.955959  108479 httplog.go:90] GET /api/v1/namespaces/kube-system: (3.887331ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:13.974279  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (4.579643ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:13.974630  108479 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0919 10:29:13.990688  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:token-cleaner: (1.361724ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:13.993514  108479 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.390207ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:14.016672  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (6.996793ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:14.017118  108479 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0919 10:29:14.027986  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:29:14.028564  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 10:29:14.028675  108479 httplog.go:90] GET /healthz: (3.726521ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:14.032494  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-controller-manager: (2.845936ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:14.036289  108479 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.694123ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:14.050522  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:29:14.050584  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 10:29:14.050652  108479 httplog.go:90] GET /healthz: (2.647078ms) 0 [Go-http-client/1.1 127.0.0.1:52936]
I0919 10:29:14.053857  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (3.809384ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52950]
I0919 10:29:14.054365  108479 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0919 10:29:14.072052  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-scheduler: (2.437269ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52950]
I0919 10:29:14.076262  108479 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.316531ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52950]
I0919 10:29:14.093840  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (4.329993ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52950]
I0919 10:29:14.094229  108479 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0919 10:29:14.111737  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles/system:controller:bootstrap-signer: (2.284457ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52950]
I0919 10:29:14.115261  108479 httplog.go:90] GET /api/v1/namespaces/kube-public: (2.625723ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52950]
I0919 10:29:14.126845  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:29:14.126927  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 10:29:14.126983  108479 httplog.go:90] GET /healthz: (1.937268ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52950]
I0919 10:29:14.133050  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles: (3.540825ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52950]
I0919 10:29:14.133610  108479 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0919 10:29:14.151929  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::extension-apiserver-authentication-reader: (2.368198ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:14.153058  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:29:14.153114  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 10:29:14.153205  108479 httplog.go:90] GET /healthz: (5.229411ms) 0 [Go-http-client/1.1 127.0.0.1:52950]
I0919 10:29:14.154988  108479 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.22212ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:14.173563  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (3.887256ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:14.173907  108479 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::extension-apiserver-authentication-reader in kube-system
I0919 10:29:14.192524  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-controller-manager: (2.709569ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:14.199240  108479 httplog.go:90] GET /api/v1/namespaces/kube-system: (5.217924ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:14.214825  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (5.04414ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:14.215485  108479 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0919 10:29:14.228284  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:29:14.228351  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 10:29:14.228428  108479 httplog.go:90] GET /healthz: (3.342937ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:14.231636  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-scheduler: (2.140883ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:14.235209  108479 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.539865ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:14.249270  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:29:14.249307  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 10:29:14.249371  108479 httplog.go:90] GET /healthz: (1.527694ms) 0 [Go-http-client/1.1 127.0.0.1:52936]
I0919 10:29:14.251659  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.17672ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52950]
I0919 10:29:14.251897  108479 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0919 10:29:14.272291  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:bootstrap-signer: (2.272792ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52950]
I0919 10:29:14.276524  108479 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.899382ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52950]
I0919 10:29:14.295103  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (5.359095ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52950]
I0919 10:29:14.295566  108479 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0919 10:29:14.311771  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:cloud-provider: (2.118199ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52950]
I0919 10:29:14.314541  108479 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.841029ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52950]
I0919 10:29:14.325819  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:29:14.325854  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 10:29:14.325889  108479 httplog.go:90] GET /healthz: (1.077252ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52950]
I0919 10:29:14.331822  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.430633ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52950]
I0919 10:29:14.332406  108479 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0919 10:29:14.348938  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:29:14.348971  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 10:29:14.349015  108479 httplog.go:90] GET /healthz: (1.241838ms) 0 [Go-http-client/1.1 127.0.0.1:52950]
I0919 10:29:14.350665  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:token-cleaner: (1.333643ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:14.353265  108479 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.966489ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:14.373865  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (4.224428ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:14.374281  108479 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0919 10:29:14.390800  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings/system:controller:bootstrap-signer: (1.424395ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:14.393201  108479 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.681577ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:14.413629  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings: (3.810925ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:14.414516  108479 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0919 10:29:14.427671  108479 httplog.go:90] GET /healthz: (1.923638ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:14.430160  108479 httplog.go:90] GET /api/v1/namespaces/default: (1.682835ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:14.448838  108479 httplog.go:90] POST /api/v1/namespaces: (17.466251ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:14.451572  108479 httplog.go:90] GET /healthz: (2.601476ms) 200 [Go-http-client/1.1 127.0.0.1:52950]
W0919 10:29:14.452605  108479 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 10:29:14.452684  108479 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 10:29:14.452711  108479 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 10:29:14.452772  108479 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 10:29:14.452785  108479 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 10:29:14.452796  108479 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 10:29:14.452804  108479 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 10:29:14.452820  108479 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 10:29:14.452829  108479 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 10:29:14.452839  108479 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 10:29:14.452888  108479 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0919 10:29:14.452903  108479 factory.go:294] Creating scheduler from algorithm provider 'DefaultProvider'
I0919 10:29:14.452911  108479 factory.go:382] Creating scheduler with fit predicates 'map[CheckNodeUnschedulable:{} CheckVolumeBinding:{} GeneralPredicates:{} MatchInterPodAffinity:{} MaxAzureDiskVolumeCount:{} MaxCSIVolumeCountPred:{} MaxEBSVolumeCount:{} MaxGCEPDVolumeCount:{} NoDiskConflict:{} NoVolumeZoneConflict:{} PodToleratesNodeTaints:{}]' and priority functions 'map[BalancedResourceAllocation:{} ImageLocalityPriority:{} InterPodAffinityPriority:{} LeastRequestedPriority:{} NodeAffinityPriority:{} NodePreferAvoidPodsPriority:{} SelectorSpreadPriority:{} TaintTolerationPriority:{}]'
I0919 10:29:14.453093  108479 shared_informer.go:197] Waiting for caches to sync for scheduler
I0919 10:29:14.453352  108479 reflector.go:118] Starting reflector *v1.Pod (12h0m0s) from k8s.io/kubernetes/test/integration/scheduler/util.go:231
I0919 10:29:14.453369  108479 reflector.go:153] Listing and watching *v1.Pod from k8s.io/kubernetes/test/integration/scheduler/util.go:231
I0919 10:29:14.454498  108479 httplog.go:90] GET /api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0: (854.076µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52950]
I0919 10:29:14.456516  108479 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (4.999566ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:14.457233  108479 get.go:251] Starting watch for /api/v1/pods, rv=30567 labels= fields=status.phase!=Failed,status.phase!=Succeeded timeout=8m55s
I0919 10:29:14.462358  108479 httplog.go:90] POST /api/v1/namespaces/default/services: (5.092334ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:14.464203  108479 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.125939ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:14.466441  108479 httplog.go:90] POST /api/v1/namespaces/default/endpoints: (1.85463ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:14.553354  108479 shared_informer.go:227] caches populated
I0919 10:29:14.553403  108479 shared_informer.go:204] Caches are synced for scheduler 
I0919 10:29:14.553889  108479 reflector.go:118] Starting reflector *v1.StatefulSet (1s) from k8s.io/client-go/informers/factory.go:134
I0919 10:29:14.553948  108479 reflector.go:153] Listing and watching *v1.StatefulSet from k8s.io/client-go/informers/factory.go:134
I0919 10:29:14.554154  108479 reflector.go:118] Starting reflector *v1.ReplicationController (1s) from k8s.io/client-go/informers/factory.go:134
I0919 10:29:14.554221  108479 reflector.go:153] Listing and watching *v1.ReplicationController from k8s.io/client-go/informers/factory.go:134
I0919 10:29:14.554754  108479 reflector.go:118] Starting reflector *v1.Service (1s) from k8s.io/client-go/informers/factory.go:134
I0919 10:29:14.554788  108479 reflector.go:153] Listing and watching *v1.Service from k8s.io/client-go/informers/factory.go:134
I0919 10:29:14.554875  108479 reflector.go:118] Starting reflector *v1.ReplicaSet (1s) from k8s.io/client-go/informers/factory.go:134
I0919 10:29:14.554902  108479 reflector.go:153] Listing and watching *v1.ReplicaSet from k8s.io/client-go/informers/factory.go:134
I0919 10:29:14.554912  108479 reflector.go:118] Starting reflector *v1beta1.PodDisruptionBudget (1s) from k8s.io/client-go/informers/factory.go:134
I0919 10:29:14.554938  108479 reflector.go:153] Listing and watching *v1beta1.PodDisruptionBudget from k8s.io/client-go/informers/factory.go:134
I0919 10:29:14.555364  108479 reflector.go:118] Starting reflector *v1.StorageClass (1s) from k8s.io/client-go/informers/factory.go:134
I0919 10:29:14.555389  108479 reflector.go:153] Listing and watching *v1.StorageClass from k8s.io/client-go/informers/factory.go:134
I0919 10:29:14.555521  108479 reflector.go:118] Starting reflector *v1.PersistentVolume (1s) from k8s.io/client-go/informers/factory.go:134
I0919 10:29:14.555556  108479 reflector.go:153] Listing and watching *v1.PersistentVolume from k8s.io/client-go/informers/factory.go:134
I0919 10:29:14.557025  108479 httplog.go:90] GET /apis/apps/v1/statefulsets?limit=500&resourceVersion=0: (793.305µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:14.557052  108479 httplog.go:90] GET /apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: (607.113µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52970]
I0919 10:29:14.557240  108479 httplog.go:90] GET /api/v1/persistentvolumes?limit=500&resourceVersion=0: (796.157µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52974]
I0919 10:29:14.557959  108479 httplog.go:90] GET /api/v1/replicationcontrollers?limit=500&resourceVersion=0: (778.749µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52964]
I0919 10:29:14.558389  108479 get.go:251] Starting watch for /apis/apps/v1/statefulsets, rv=30567 labels= fields= timeout=7m25s
I0919 10:29:14.558771  108479 httplog.go:90] GET /api/v1/services?limit=500&resourceVersion=0: (651.097µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52966]
I0919 10:29:14.558989  108479 get.go:251] Starting watch for /api/v1/replicationcontrollers, rv=30567 labels= fields= timeout=7m28s
I0919 10:29:14.559307  108479 get.go:251] Starting watch for /apis/policy/v1beta1/poddisruptionbudgets, rv=30567 labels= fields= timeout=8m10s
I0919 10:29:14.559695  108479 get.go:251] Starting watch for /api/v1/persistentvolumes, rv=30567 labels= fields= timeout=6m18s
I0919 10:29:14.559819  108479 get.go:251] Starting watch for /api/v1/services, rv=30683 labels= fields= timeout=9m14s
I0919 10:29:14.560243  108479 httplog.go:90] GET /apis/apps/v1/replicasets?limit=500&resourceVersion=0: (2.862166ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52968]
I0919 10:29:14.560362  108479 reflector.go:118] Starting reflector *v1.PersistentVolumeClaim (1s) from k8s.io/client-go/informers/factory.go:134
I0919 10:29:14.560388  108479 reflector.go:153] Listing and watching *v1.PersistentVolumeClaim from k8s.io/client-go/informers/factory.go:134
I0919 10:29:14.560442  108479 httplog.go:90] GET /apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: (684.891µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52972]
I0919 10:29:14.560705  108479 reflector.go:118] Starting reflector *v1beta1.CSINode (1s) from k8s.io/client-go/informers/factory.go:134
I0919 10:29:14.560726  108479 reflector.go:153] Listing and watching *v1beta1.CSINode from k8s.io/client-go/informers/factory.go:134
I0919 10:29:14.561340  108479 get.go:251] Starting watch for /apis/storage.k8s.io/v1/storageclasses, rv=30567 labels= fields= timeout=9m53s
I0919 10:29:14.561741  108479 httplog.go:90] GET /apis/storage.k8s.io/v1beta1/csinodes?limit=500&resourceVersion=0: (765.999µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52968]
I0919 10:29:14.562734  108479 get.go:251] Starting watch for /apis/apps/v1/replicasets, rv=30567 labels= fields= timeout=7m9s
I0919 10:29:14.562750  108479 get.go:251] Starting watch for /apis/storage.k8s.io/v1beta1/csinodes, rv=30567 labels= fields= timeout=7m15s
I0919 10:29:14.562899  108479 httplog.go:90] GET /api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: (728.28µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52976]
I0919 10:29:14.564639  108479 reflector.go:118] Starting reflector *v1.Node (1s) from k8s.io/client-go/informers/factory.go:134
I0919 10:29:14.564691  108479 reflector.go:153] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:134
I0919 10:29:14.564693  108479 get.go:251] Starting watch for /api/v1/persistentvolumeclaims, rv=30567 labels= fields= timeout=6m53s
I0919 10:29:14.565951  108479 httplog.go:90] GET /api/v1/nodes?limit=500&resourceVersion=0: (926.373µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52980]
I0919 10:29:14.569361  108479 get.go:251] Starting watch for /api/v1/nodes, rv=30567 labels= fields= timeout=8m50s
I0919 10:29:14.658220  108479 shared_informer.go:227] caches populated
I0919 10:29:14.658279  108479 shared_informer.go:227] caches populated
I0919 10:29:14.658298  108479 shared_informer.go:227] caches populated
I0919 10:29:14.658315  108479 shared_informer.go:227] caches populated
I0919 10:29:14.658329  108479 shared_informer.go:227] caches populated
I0919 10:29:14.658340  108479 shared_informer.go:227] caches populated
I0919 10:29:14.658359  108479 shared_informer.go:227] caches populated
I0919 10:29:14.658371  108479 shared_informer.go:227] caches populated
I0919 10:29:14.658387  108479 shared_informer.go:227] caches populated
I0919 10:29:14.658401  108479 shared_informer.go:227] caches populated
I0919 10:29:14.658424  108479 shared_informer.go:227] caches populated
I0919 10:29:14.663566  108479 httplog.go:90] POST /api/v1/nodes: (3.400981ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52986]
I0919 10:29:14.663661  108479 node_tree.go:93] Added node "testnode" in group "" to NodeTree
I0919 10:29:14.667051  108479 httplog.go:90] PUT /api/v1/nodes/testnode/status: (2.709834ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52986]
I0919 10:29:14.670263  108479 httplog.go:90] POST /api/v1/namespaces/node-pid-pressure6b710ffd-f59d-43f7-bb3d-1bb68bfa5cb5/pods: (2.505912ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52986]
I0919 10:29:14.670859  108479 scheduling_queue.go:830] About to try and schedule pod node-pid-pressure6b710ffd-f59d-43f7-bb3d-1bb68bfa5cb5/pidpressure-fake-name
I0919 10:29:14.670881  108479 scheduler.go:530] Attempting to schedule pod: node-pid-pressure6b710ffd-f59d-43f7-bb3d-1bb68bfa5cb5/pidpressure-fake-name
I0919 10:29:14.671092  108479 scheduler_binder.go:257] AssumePodVolumes for pod "node-pid-pressure6b710ffd-f59d-43f7-bb3d-1bb68bfa5cb5/pidpressure-fake-name", node "testnode"
I0919 10:29:14.671119  108479 scheduler_binder.go:267] AssumePodVolumes for pod "node-pid-pressure6b710ffd-f59d-43f7-bb3d-1bb68bfa5cb5/pidpressure-fake-name", node "testnode": all PVCs bound and nothing to do
I0919 10:29:14.671281  108479 factory.go:606] Attempting to bind pidpressure-fake-name to testnode
I0919 10:29:14.673726  108479 httplog.go:90] POST /api/v1/namespaces/node-pid-pressure6b710ffd-f59d-43f7-bb3d-1bb68bfa5cb5/pods/pidpressure-fake-name/binding: (2.100611ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52986]
I0919 10:29:14.674014  108479 scheduler.go:662] pod node-pid-pressure6b710ffd-f59d-43f7-bb3d-1bb68bfa5cb5/pidpressure-fake-name is bound successfully on node "testnode", 1 nodes evaluated, 1 nodes were found feasible. Bound node resource: "Capacity: CPU<0>|Memory<0>|Pods<32>|StorageEphemeral<0>; Allocatable: CPU<0>|Memory<0>|Pods<32>|StorageEphemeral<0>.".
I0919 10:29:14.676062  108479 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/node-pid-pressure6b710ffd-f59d-43f7-bb3d-1bb68bfa5cb5/events: (1.493758ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52986]
I0919 10:29:14.773035  108479 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure6b710ffd-f59d-43f7-bb3d-1bb68bfa5cb5/pods/pidpressure-fake-name: (1.940525ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52986]
I0919 10:29:14.873622  108479 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure6b710ffd-f59d-43f7-bb3d-1bb68bfa5cb5/pods/pidpressure-fake-name: (2.513466ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52986]
I0919 10:29:14.973586  108479 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure6b710ffd-f59d-43f7-bb3d-1bb68bfa5cb5/pods/pidpressure-fake-name: (2.498ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52986]
I0919 10:29:15.073158  108479 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure6b710ffd-f59d-43f7-bb3d-1bb68bfa5cb5/pods/pidpressure-fake-name: (2.0527ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52986]
I0919 10:29:15.173599  108479 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure6b710ffd-f59d-43f7-bb3d-1bb68bfa5cb5/pods/pidpressure-fake-name: (2.457686ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52986]
I0919 10:29:15.272662  108479 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure6b710ffd-f59d-43f7-bb3d-1bb68bfa5cb5/pods/pidpressure-fake-name: (1.692986ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52986]
I0919 10:29:15.372990  108479 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure6b710ffd-f59d-43f7-bb3d-1bb68bfa5cb5/pods/pidpressure-fake-name: (1.935019ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52986]
I0919 10:29:15.473009  108479 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure6b710ffd-f59d-43f7-bb3d-1bb68bfa5cb5/pods/pidpressure-fake-name: (1.955916ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52986]
I0919 10:29:15.558433  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:29:15.559573  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:29:15.561028  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:29:15.562398  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:29:15.563642  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:29:15.568847  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:29:15.572684  108479 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure6b710ffd-f59d-43f7-bb3d-1bb68bfa5cb5/pods/pidpressure-fake-name: (1.70183ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52986]
I0919 10:29:15.672946  108479 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure6b710ffd-f59d-43f7-bb3d-1bb68bfa5cb5/pods/pidpressure-fake-name: (1.859903ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52986]
I0919 10:29:15.772933  108479 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure6b710ffd-f59d-43f7-bb3d-1bb68bfa5cb5/pods/pidpressure-fake-name: (1.939464ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52986]
I0919 10:29:15.872499  108479 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure6b710ffd-f59d-43f7-bb3d-1bb68bfa5cb5/pods/pidpressure-fake-name: (1.517196ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52986]
I0919 10:29:15.973847  108479 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure6b710ffd-f59d-43f7-bb3d-1bb68bfa5cb5/pods/pidpressure-fake-name: (2.817209ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52986]
I0919 10:29:16.072631  108479 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure6b710ffd-f59d-43f7-bb3d-1bb68bfa5cb5/pods/pidpressure-fake-name: (1.622784ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52986]
I0919 10:29:16.172582  108479 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure6b710ffd-f59d-43f7-bb3d-1bb68bfa5cb5/pods/pidpressure-fake-name: (1.56251ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52986]
I0919 10:29:16.274244  108479 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure6b710ffd-f59d-43f7-bb3d-1bb68bfa5cb5/pods/pidpressure-fake-name: (1.998489ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52986]
I0919 10:29:16.372715  108479 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure6b710ffd-f59d-43f7-bb3d-1bb68bfa5cb5/pods/pidpressure-fake-name: (1.702866ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52986]
I0919 10:29:16.472804  108479 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure6b710ffd-f59d-43f7-bb3d-1bb68bfa5cb5/pods/pidpressure-fake-name: (1.767194ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52986]
I0919 10:29:16.558618  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
[... 5 more identical "forcing resync" lines from k8s.io/client-go/informers/factory.go:134 ...]
[... polling loop repeated from 10:29:16.572 through 10:29:24.372: GET /api/v1/namespaces/node-pid-pressure6b710ffd-f59d-43f7-bb3d-1bb68bfa5cb5/pods/pidpressure-fake-name every ~100ms (all 200, 1.3-3.2ms), plus a cycle of 6 informer "forcing resync" lines each second ...]
I0919 10:29:24.430105  108479 httplog.go:90] GET /api/v1/namespaces/default: (1.510692ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52986]
I0919 10:29:24.432295  108479 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.578506ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52986]
I0919 10:29:24.434243  108479 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.292305ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52986]
I0919 10:29:24.472352  108479 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure6b710ffd-f59d-43f7-bb3d-1bb68bfa5cb5/pods/pidpressure-fake-name: (1.394049ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52986]
I0919 10:29:24.560111  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:29:24.561570  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:29:24.563365  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:29:24.564092  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:29:24.565124  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:29:24.570863  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:29:24.573076  108479 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure6b710ffd-f59d-43f7-bb3d-1bb68bfa5cb5/pods/pidpressure-fake-name: (1.837656ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52986]
I0919 10:29:24.672471  108479 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure6b710ffd-f59d-43f7-bb3d-1bb68bfa5cb5/pods/pidpressure-fake-name: (1.512705ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52986]
I0919 10:29:24.775410  108479 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure6b710ffd-f59d-43f7-bb3d-1bb68bfa5cb5/pods/pidpressure-fake-name: (1.725345ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52986]
I0919 10:29:24.872831  108479 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure6b710ffd-f59d-43f7-bb3d-1bb68bfa5cb5/pods/pidpressure-fake-name: (1.750323ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52986]
I0919 10:29:24.972489  108479 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure6b710ffd-f59d-43f7-bb3d-1bb68bfa5cb5/pods/pidpressure-fake-name: (1.50643ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52986]
I0919 10:29:25.073011  108479 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure6b710ffd-f59d-43f7-bb3d-1bb68bfa5cb5/pods/pidpressure-fake-name: (2.039853ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52986]
I0919 10:29:25.172958  108479 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure6b710ffd-f59d-43f7-bb3d-1bb68bfa5cb5/pods/pidpressure-fake-name: (2.000564ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52986]
I0919 10:29:25.272736  108479 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure6b710ffd-f59d-43f7-bb3d-1bb68bfa5cb5/pods/pidpressure-fake-name: (1.778715ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52986]
I0919 10:29:25.372925  108479 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure6b710ffd-f59d-43f7-bb3d-1bb68bfa5cb5/pods/pidpressure-fake-name: (1.83299ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52986]
I0919 10:29:25.473280  108479 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure6b710ffd-f59d-43f7-bb3d-1bb68bfa5cb5/pods/pidpressure-fake-name: (2.245627ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52986]
I0919 10:29:25.560305  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:29:25.561774  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:29:25.563530  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:29:25.564284  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:29:25.565329  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:29:25.571023  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:29:25.573001  108479 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure6b710ffd-f59d-43f7-bb3d-1bb68bfa5cb5/pods/pidpressure-fake-name: (1.976545ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52986]
I0919 10:29:25.672685  108479 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure6b710ffd-f59d-43f7-bb3d-1bb68bfa5cb5/pods/pidpressure-fake-name: (1.629473ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52986]
I0919 10:29:25.772650  108479 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure6b710ffd-f59d-43f7-bb3d-1bb68bfa5cb5/pods/pidpressure-fake-name: (1.649906ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52986]
I0919 10:29:25.873494  108479 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure6b710ffd-f59d-43f7-bb3d-1bb68bfa5cb5/pods/pidpressure-fake-name: (1.796609ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52986]
I0919 10:29:25.972775  108479 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure6b710ffd-f59d-43f7-bb3d-1bb68bfa5cb5/pods/pidpressure-fake-name: (1.772927ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52986]
I0919 10:29:26.072761  108479 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure6b710ffd-f59d-43f7-bb3d-1bb68bfa5cb5/pods/pidpressure-fake-name: (1.76164ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52986]
I0919 10:29:26.172877  108479 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure6b710ffd-f59d-43f7-bb3d-1bb68bfa5cb5/pods/pidpressure-fake-name: (1.858664ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52986]
I0919 10:29:26.272879  108479 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure6b710ffd-f59d-43f7-bb3d-1bb68bfa5cb5/pods/pidpressure-fake-name: (1.922557ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52986]
I0919 10:29:26.372919  108479 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure6b710ffd-f59d-43f7-bb3d-1bb68bfa5cb5/pods/pidpressure-fake-name: (1.900224ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52986]
I0919 10:29:26.472647  108479 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure6b710ffd-f59d-43f7-bb3d-1bb68bfa5cb5/pods/pidpressure-fake-name: (1.647693ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52986]
I0919 10:29:26.562253  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:29:26.562370  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:29:26.563750  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:29:26.564461  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:29:26.565490  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:29:26.571212  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:29:26.572766  108479 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure6b710ffd-f59d-43f7-bb3d-1bb68bfa5cb5/pods/pidpressure-fake-name: (1.799093ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52986]
[... repeats elided: GET /api/v1/namespaces/node-pid-pressure6b710ffd-f59d-43f7-bb3d-1bb68bfa5cb5/pods/pidpressure-fake-name polled every ~100 ms (all 200, ~1.2-6.9 ms), interleaved with six reflector.go:236 "forcing resync" lines per second, 10:29:26.672 through 10:29:34.372 ...]
I0919 10:29:34.429990  108479 httplog.go:90] GET /api/v1/namespaces/default: (1.338618ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52986]
I0919 10:29:34.431629  108479 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.193997ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52986]
I0919 10:29:34.433071  108479 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.075872ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52986]
I0919 10:29:34.472621  108479 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure6b710ffd-f59d-43f7-bb3d-1bb68bfa5cb5/pods/pidpressure-fake-name: (1.64257ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52986]
[... repeats elided: same ~100 ms GET polling of pods/pidpressure-fake-name (all 200) and once-per-second reflector resync bursts, 10:29:34.563 through 10:29:37.678 ...]
I0919 10:29:37.772890  108479 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure6b710ffd-f59d-43f7-bb3d-1bb68bfa5cb5/pods/pidpressure-fake-name: (1.766856ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52986]
I0919 10:29:37.872380  108479 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure6b710ffd-f59d-43f7-bb3d-1bb68bfa5cb5/pods/pidpressure-fake-name: (1.488218ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52986]
I0919 10:29:37.972923  108479 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure6b710ffd-f59d-43f7-bb3d-1bb68bfa5cb5/pods/pidpressure-fake-name: (1.892805ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52986]
I0919 10:29:38.072721  108479 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure6b710ffd-f59d-43f7-bb3d-1bb68bfa5cb5/pods/pidpressure-fake-name: (1.681012ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52986]
I0919 10:29:38.173077  108479 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure6b710ffd-f59d-43f7-bb3d-1bb68bfa5cb5/pods/pidpressure-fake-name: (2.066959ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52986]
I0919 10:29:38.272943  108479 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure6b710ffd-f59d-43f7-bb3d-1bb68bfa5cb5/pods/pidpressure-fake-name: (1.950282ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52986]
I0919 10:29:38.372806  108479 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure6b710ffd-f59d-43f7-bb3d-1bb68bfa5cb5/pods/pidpressure-fake-name: (1.693795ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52986]
I0919 10:29:38.473293  108479 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure6b710ffd-f59d-43f7-bb3d-1bb68bfa5cb5/pods/pidpressure-fake-name: (1.801991ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52986]
I0919 10:29:38.564750  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:29:38.564795  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:29:38.565737  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:29:38.566605  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:29:38.567649  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:29:38.572627  108479 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure6b710ffd-f59d-43f7-bb3d-1bb68bfa5cb5/pods/pidpressure-fake-name: (1.663837ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52986]
I0919 10:29:38.574319  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:29:38.672749  108479 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure6b710ffd-f59d-43f7-bb3d-1bb68bfa5cb5/pods/pidpressure-fake-name: (1.730984ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52986]
I0919 10:29:38.772578  108479 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure6b710ffd-f59d-43f7-bb3d-1bb68bfa5cb5/pods/pidpressure-fake-name: (1.575179ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52986]
I0919 10:29:38.874505  108479 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure6b710ffd-f59d-43f7-bb3d-1bb68bfa5cb5/pods/pidpressure-fake-name: (3.511982ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52986]
I0919 10:29:38.973428  108479 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure6b710ffd-f59d-43f7-bb3d-1bb68bfa5cb5/pods/pidpressure-fake-name: (1.812739ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52986]
I0919 10:29:39.072487  108479 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure6b710ffd-f59d-43f7-bb3d-1bb68bfa5cb5/pods/pidpressure-fake-name: (1.467216ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52986]
I0919 10:29:39.172780  108479 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure6b710ffd-f59d-43f7-bb3d-1bb68bfa5cb5/pods/pidpressure-fake-name: (1.786957ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52986]
I0919 10:29:39.272458  108479 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure6b710ffd-f59d-43f7-bb3d-1bb68bfa5cb5/pods/pidpressure-fake-name: (1.540934ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52986]
I0919 10:29:39.372534  108479 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure6b710ffd-f59d-43f7-bb3d-1bb68bfa5cb5/pods/pidpressure-fake-name: (1.571141ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52986]
I0919 10:29:39.472541  108479 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure6b710ffd-f59d-43f7-bb3d-1bb68bfa5cb5/pods/pidpressure-fake-name: (1.529692ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52986]
I0919 10:29:39.564936  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:29:39.564945  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:29:39.565910  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:29:39.566757  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:29:39.567970  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:29:39.572539  108479 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure6b710ffd-f59d-43f7-bb3d-1bb68bfa5cb5/pods/pidpressure-fake-name: (1.564324ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52986]
I0919 10:29:39.574523  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:29:39.672605  108479 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure6b710ffd-f59d-43f7-bb3d-1bb68bfa5cb5/pods/pidpressure-fake-name: (1.623225ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52986]
I0919 10:29:39.772920  108479 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure6b710ffd-f59d-43f7-bb3d-1bb68bfa5cb5/pods/pidpressure-fake-name: (1.949167ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52986]
I0919 10:29:39.873028  108479 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure6b710ffd-f59d-43f7-bb3d-1bb68bfa5cb5/pods/pidpressure-fake-name: (1.988454ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52986]
I0919 10:29:39.973152  108479 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure6b710ffd-f59d-43f7-bb3d-1bb68bfa5cb5/pods/pidpressure-fake-name: (1.981491ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52986]
I0919 10:29:40.076818  108479 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure6b710ffd-f59d-43f7-bb3d-1bb68bfa5cb5/pods/pidpressure-fake-name: (5.451002ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52986]
I0919 10:29:40.172575  108479 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure6b710ffd-f59d-43f7-bb3d-1bb68bfa5cb5/pods/pidpressure-fake-name: (1.580905ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52986]
I0919 10:29:40.273027  108479 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure6b710ffd-f59d-43f7-bb3d-1bb68bfa5cb5/pods/pidpressure-fake-name: (2.072985ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52986]
I0919 10:29:40.372847  108479 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure6b710ffd-f59d-43f7-bb3d-1bb68bfa5cb5/pods/pidpressure-fake-name: (1.445865ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52986]
I0919 10:29:40.472746  108479 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure6b710ffd-f59d-43f7-bb3d-1bb68bfa5cb5/pods/pidpressure-fake-name: (1.744104ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52986]
I0919 10:29:40.565139  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:29:40.565322  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:29:40.566090  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:29:40.566902  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:29:40.568148  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:29:40.572559  108479 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure6b710ffd-f59d-43f7-bb3d-1bb68bfa5cb5/pods/pidpressure-fake-name: (1.558248ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52986]
I0919 10:29:40.574742  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:29:40.673932  108479 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure6b710ffd-f59d-43f7-bb3d-1bb68bfa5cb5/pods/pidpressure-fake-name: (2.948293ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52986]
I0919 10:29:40.772696  108479 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure6b710ffd-f59d-43f7-bb3d-1bb68bfa5cb5/pods/pidpressure-fake-name: (1.569728ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52986]
I0919 10:29:40.872604  108479 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure6b710ffd-f59d-43f7-bb3d-1bb68bfa5cb5/pods/pidpressure-fake-name: (1.638879ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52986]
I0919 10:29:40.972463  108479 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure6b710ffd-f59d-43f7-bb3d-1bb68bfa5cb5/pods/pidpressure-fake-name: (1.47369ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52986]
I0919 10:29:41.072508  108479 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure6b710ffd-f59d-43f7-bb3d-1bb68bfa5cb5/pods/pidpressure-fake-name: (1.566532ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52986]
I0919 10:29:41.172593  108479 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure6b710ffd-f59d-43f7-bb3d-1bb68bfa5cb5/pods/pidpressure-fake-name: (1.488413ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52986]
I0919 10:29:41.272813  108479 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure6b710ffd-f59d-43f7-bb3d-1bb68bfa5cb5/pods/pidpressure-fake-name: (1.772503ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52986]
I0919 10:29:41.372782  108479 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure6b710ffd-f59d-43f7-bb3d-1bb68bfa5cb5/pods/pidpressure-fake-name: (1.755756ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52986]
I0919 10:29:41.472794  108479 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure6b710ffd-f59d-43f7-bb3d-1bb68bfa5cb5/pods/pidpressure-fake-name: (1.783071ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52986]
I0919 10:29:41.565439  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:29:41.565558  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:29:41.566264  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:29:41.567067  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:29:41.568367  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:29:41.573390  108479 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure6b710ffd-f59d-43f7-bb3d-1bb68bfa5cb5/pods/pidpressure-fake-name: (2.368235ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52986]
I0919 10:29:41.574951  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:29:41.672709  108479 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure6b710ffd-f59d-43f7-bb3d-1bb68bfa5cb5/pods/pidpressure-fake-name: (1.657122ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52986]
I0919 10:29:41.772929  108479 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure6b710ffd-f59d-43f7-bb3d-1bb68bfa5cb5/pods/pidpressure-fake-name: (1.881687ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52986]
I0919 10:29:41.872735  108479 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure6b710ffd-f59d-43f7-bb3d-1bb68bfa5cb5/pods/pidpressure-fake-name: (1.655074ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52986]
I0919 10:29:41.973505  108479 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure6b710ffd-f59d-43f7-bb3d-1bb68bfa5cb5/pods/pidpressure-fake-name: (2.431015ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52986]
I0919 10:29:42.072916  108479 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure6b710ffd-f59d-43f7-bb3d-1bb68bfa5cb5/pods/pidpressure-fake-name: (1.882958ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52986]
I0919 10:29:42.172677  108479 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure6b710ffd-f59d-43f7-bb3d-1bb68bfa5cb5/pods/pidpressure-fake-name: (1.733211ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52986]
I0919 10:29:42.273289  108479 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure6b710ffd-f59d-43f7-bb3d-1bb68bfa5cb5/pods/pidpressure-fake-name: (1.650396ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52986]
I0919 10:29:42.374653  108479 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure6b710ffd-f59d-43f7-bb3d-1bb68bfa5cb5/pods/pidpressure-fake-name: (1.516531ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52986]
I0919 10:29:42.472622  108479 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure6b710ffd-f59d-43f7-bb3d-1bb68bfa5cb5/pods/pidpressure-fake-name: (1.626426ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52986]
I0919 10:29:42.565675  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:29:42.565809  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:29:42.566434  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:29:42.567311  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:29:42.568497  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:29:42.572970  108479 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure6b710ffd-f59d-43f7-bb3d-1bb68bfa5cb5/pods/pidpressure-fake-name: (2.011814ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52986]
I0919 10:29:42.575141  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:29:42.672385  108479 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure6b710ffd-f59d-43f7-bb3d-1bb68bfa5cb5/pods/pidpressure-fake-name: (1.409558ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52986]
I0919 10:29:42.773464  108479 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure6b710ffd-f59d-43f7-bb3d-1bb68bfa5cb5/pods/pidpressure-fake-name: (1.580955ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52986]
I0919 10:29:42.872887  108479 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure6b710ffd-f59d-43f7-bb3d-1bb68bfa5cb5/pods/pidpressure-fake-name: (1.877858ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52986]
I0919 10:29:42.972347  108479 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure6b710ffd-f59d-43f7-bb3d-1bb68bfa5cb5/pods/pidpressure-fake-name: (1.402448ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52986]
I0919 10:29:43.072667  108479 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure6b710ffd-f59d-43f7-bb3d-1bb68bfa5cb5/pods/pidpressure-fake-name: (1.614963ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52986]
I0919 10:29:43.177338  108479 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure6b710ffd-f59d-43f7-bb3d-1bb68bfa5cb5/pods/pidpressure-fake-name: (6.248597ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52986]
I0919 10:29:43.272115  108479 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure6b710ffd-f59d-43f7-bb3d-1bb68bfa5cb5/pods/pidpressure-fake-name: (1.222245ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52986]
I0919 10:29:43.372555  108479 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure6b710ffd-f59d-43f7-bb3d-1bb68bfa5cb5/pods/pidpressure-fake-name: (1.560529ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52986]
I0919 10:29:43.472776  108479 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure6b710ffd-f59d-43f7-bb3d-1bb68bfa5cb5/pods/pidpressure-fake-name: (1.853518ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52986]
I0919 10:29:43.565862  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:29:43.565934  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:29:43.566598  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:29:43.567483  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:29:43.568659  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:29:43.573093  108479 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure6b710ffd-f59d-43f7-bb3d-1bb68bfa5cb5/pods/pidpressure-fake-name: (2.159647ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52986]
I0919 10:29:43.575320  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:29:43.672363  108479 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure6b710ffd-f59d-43f7-bb3d-1bb68bfa5cb5/pods/pidpressure-fake-name: (1.384731ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52986]
I0919 10:29:43.772264  108479 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure6b710ffd-f59d-43f7-bb3d-1bb68bfa5cb5/pods/pidpressure-fake-name: (1.248891ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52986]
I0919 10:29:43.872565  108479 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure6b710ffd-f59d-43f7-bb3d-1bb68bfa5cb5/pods/pidpressure-fake-name: (1.537135ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52986]
I0919 10:29:43.973095  108479 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure6b710ffd-f59d-43f7-bb3d-1bb68bfa5cb5/pods/pidpressure-fake-name: (1.635608ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52986]
I0919 10:29:44.072479  108479 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure6b710ffd-f59d-43f7-bb3d-1bb68bfa5cb5/pods/pidpressure-fake-name: (1.463125ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52986]
I0919 10:29:44.172766  108479 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure6b710ffd-f59d-43f7-bb3d-1bb68bfa5cb5/pods/pidpressure-fake-name: (1.579535ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52986]
I0919 10:29:44.272676  108479 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure6b710ffd-f59d-43f7-bb3d-1bb68bfa5cb5/pods/pidpressure-fake-name: (1.65434ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52986]
I0919 10:29:44.374958  108479 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure6b710ffd-f59d-43f7-bb3d-1bb68bfa5cb5/pods/pidpressure-fake-name: (3.953652ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52986]
I0919 10:29:44.430508  108479 httplog.go:90] GET /api/v1/namespaces/default: (1.685347ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52986]
I0919 10:29:44.438013  108479 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (7.038497ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52986]
I0919 10:29:44.439944  108479 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.243823ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52986]
I0919 10:29:44.474387  108479 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure6b710ffd-f59d-43f7-bb3d-1bb68bfa5cb5/pods/pidpressure-fake-name: (1.76468ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52986]
I0919 10:29:44.566066  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:29:44.566088  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:29:44.566777  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:29:44.567639  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:29:44.568833  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:29:44.572610  108479 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure6b710ffd-f59d-43f7-bb3d-1bb68bfa5cb5/pods/pidpressure-fake-name: (1.645698ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52986]
I0919 10:29:44.575873  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:29:44.672842  108479 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure6b710ffd-f59d-43f7-bb3d-1bb68bfa5cb5/pods/pidpressure-fake-name: (1.820081ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52986]
I0919 10:29:44.674661  108479 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure6b710ffd-f59d-43f7-bb3d-1bb68bfa5cb5/pods/pidpressure-fake-name: (1.445589ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52986]
I0919 10:29:44.682067  108479 httplog.go:90] DELETE /api/v1/namespaces/node-pid-pressure6b710ffd-f59d-43f7-bb3d-1bb68bfa5cb5/pods/pidpressure-fake-name: (6.733435ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52986]
I0919 10:29:44.684436  108479 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure6b710ffd-f59d-43f7-bb3d-1bb68bfa5cb5/pods/pidpressure-fake-name: (883.162µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52986]
E0919 10:29:44.685193  108479 scheduling_queue.go:833] Error while retrieving next pod from scheduling queue: scheduling queue is closed
I0919 10:29:44.685631  108479 httplog.go:90] GET /api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=30567&timeout=6m18s&timeoutSeconds=378&watch=true: (30.126271373s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52974]
I0919 10:29:44.685696  108479 httplog.go:90] GET /apis/policy/v1beta1/poddisruptionbudgets?allowWatchBookmarks=true&resourceVersion=30567&timeout=8m10s&timeoutSeconds=490&watch=true: (30.126755939s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52970]
I0919 10:29:44.685633  108479 httplog.go:90] GET /apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=30567&timeout=9m53s&timeoutSeconds=593&watch=true: (30.124689429s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52972]
I0919 10:29:44.685801  108479 httplog.go:90] GET /api/v1/replicationcontrollers?allowWatchBookmarks=true&resourceVersion=30567&timeout=7m28s&timeoutSeconds=448&watch=true: (30.127187322s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52964]
I0919 10:29:44.685819  108479 httplog.go:90] GET /api/v1/services?allowWatchBookmarks=true&resourceVersion=30683&timeout=9m14s&timeoutSeconds=554&watch=true: (30.126372498s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52966]
I0919 10:29:44.685887  108479 httplog.go:90] GET /apis/storage.k8s.io/v1beta1/csinodes?allowWatchBookmarks=true&resourceVersion=30567&timeout=7m15s&timeoutSeconds=435&watch=true: (30.123500869s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52978]
I0919 10:29:44.685952  108479 httplog.go:90] GET /apis/apps/v1/replicasets?allowWatchBookmarks=true&resourceVersion=30567&timeout=7m9s&timeoutSeconds=429&watch=true: (30.123609941s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52968]
I0919 10:29:44.686050  108479 httplog.go:90] GET /api/v1/nodes?allowWatchBookmarks=true&resourceVersion=30567&timeout=8m50s&timeoutSeconds=530&watch=true: (30.117282947s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52980]
I0919 10:29:44.686053  108479 httplog.go:90] GET /api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=30567&timeout=6m53s&timeoutSeconds=413&watch=true: (30.121776843s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52976]
I0919 10:29:44.686130  108479 httplog.go:90] GET /api/v1/pods?allowWatchBookmarks=true&fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&resourceVersion=30567&timeoutSeconds=535&watch=true: (30.22957106s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52950]
I0919 10:29:44.687113  108479 httplog.go:90] GET /apis/apps/v1/statefulsets?allowWatchBookmarks=true&resourceVersion=30567&timeout=7m25s&timeoutSeconds=445&watch=true: (30.129123013s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52936]
I0919 10:29:44.691242  108479 httplog.go:90] DELETE /api/v1/nodes: (4.988033ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52986]
I0919 10:29:44.691535  108479 controller.go:182] Shutting down kubernetes service endpoint reconciler
I0919 10:29:44.695548  108479 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.11298ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52986]
I0919 10:29:44.697904  108479 httplog.go:90] PUT /api/v1/namespaces/default/endpoints/kubernetes: (1.859765ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52986]
--- FAIL: TestNodePIDPressure (33.81s)
    predicates_test.go:924: Test Failed: error, timed out waiting for the condition, while waiting for scheduled

				from junit_d965d8661547eb73cabe6d94d5550ec333e4c0fa_20190919-102151.xml



k8s.io/kubernetes/test/integration/scheduler TestSchedulerCreationFromConfigMap 4.07s

go test -v k8s.io/kubernetes/test/integration/scheduler -run TestSchedulerCreationFromConfigMap$
=== RUN   TestSchedulerCreationFromConfigMap
W0919 10:31:22.712883  108479 services.go:35] No CIDR for service cluster IPs specified. Default value which was 10.0.0.0/24 is deprecated and will be removed in future releases. Please specify it using --service-cluster-ip-range on kube-apiserver.
I0919 10:31:22.712923  108479 services.go:47] Setting service IP to "10.0.0.1" (read-write).
I0919 10:31:22.712936  108479 master.go:303] Node port range unspecified. Defaulting to 30000-32767.
I0919 10:31:22.712947  108479 master.go:259] Using reconciler: 
I0919 10:31:22.714453  108479 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.714683  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:31:22.714718  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:31:22.715493  108479 store.go:1342] Monitoring podtemplates count at <storage-prefix>//podtemplates
I0919 10:31:22.715557  108479 reflector.go:153] Listing and watching *core.PodTemplate from storage/cacher.go:/podtemplates
I0919 10:31:22.715627  108479 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.715883  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:31:22.715901  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:31:22.716698  108479 store.go:1342] Monitoring events count at <storage-prefix>//events
I0919 10:31:22.716742  108479 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.716788  108479 reflector.go:153] Listing and watching *core.Event from storage/cacher.go:/events
I0919 10:31:22.716886  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:31:22.716907  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:31:22.717806  108479 watch_cache.go:405] Replace watchCache (rev: 47710) 
I0919 10:31:22.717880  108479 watch_cache.go:405] Replace watchCache (rev: 47710) 
I0919 10:31:22.721453  108479 store.go:1342] Monitoring limitranges count at <storage-prefix>//limitranges
I0919 10:31:22.721489  108479 reflector.go:153] Listing and watching *core.LimitRange from storage/cacher.go:/limitranges
I0919 10:31:22.721488  108479 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.739705  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:31:22.739751  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:31:22.739957  108479 watch_cache.go:405] Replace watchCache (rev: 47710) 
I0919 10:31:22.740819  108479 store.go:1342] Monitoring resourcequotas count at <storage-prefix>//resourcequotas
I0919 10:31:22.740894  108479 reflector.go:153] Listing and watching *core.ResourceQuota from storage/cacher.go:/resourcequotas
I0919 10:31:22.741463  108479 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.741645  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:31:22.741675  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:31:22.741944  108479 watch_cache.go:405] Replace watchCache (rev: 47710) 
I0919 10:31:22.742484  108479 store.go:1342] Monitoring secrets count at <storage-prefix>//secrets
I0919 10:31:22.742580  108479 reflector.go:153] Listing and watching *core.Secret from storage/cacher.go:/secrets
I0919 10:31:22.742778  108479 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.742901  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:31:22.742932  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:31:22.743434  108479 store.go:1342] Monitoring persistentvolumes count at <storage-prefix>//persistentvolumes
I0919 10:31:22.743493  108479 reflector.go:153] Listing and watching *core.PersistentVolume from storage/cacher.go:/persistentvolumes
I0919 10:31:22.743558  108479 watch_cache.go:405] Replace watchCache (rev: 47710) 
I0919 10:31:22.743595  108479 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.743741  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:31:22.743771  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:31:22.744460  108479 store.go:1342] Monitoring persistentvolumeclaims count at <storage-prefix>//persistentvolumeclaims
I0919 10:31:22.744499  108479 reflector.go:153] Listing and watching *core.PersistentVolumeClaim from storage/cacher.go:/persistentvolumeclaims
I0919 10:31:22.744608  108479 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.744725  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:31:22.744745  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:31:22.744824  108479 watch_cache.go:405] Replace watchCache (rev: 47710) 
I0919 10:31:22.745491  108479 store.go:1342] Monitoring configmaps count at <storage-prefix>//configmaps
I0919 10:31:22.745533  108479 reflector.go:153] Listing and watching *core.ConfigMap from storage/cacher.go:/configmaps
I0919 10:31:22.745609  108479 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.745720  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:31:22.745739  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:31:22.746308  108479 watch_cache.go:405] Replace watchCache (rev: 47710) 
I0919 10:31:22.747353  108479 store.go:1342] Monitoring namespaces count at <storage-prefix>//namespaces
I0919 10:31:22.747471  108479 reflector.go:153] Listing and watching *core.Namespace from storage/cacher.go:/namespaces
I0919 10:31:22.747515  108479 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.747628  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:31:22.747701  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:31:22.748428  108479 store.go:1342] Monitoring endpoints count at <storage-prefix>//services/endpoints
I0919 10:31:22.748459  108479 watch_cache.go:405] Replace watchCache (rev: 47710) 
I0919 10:31:22.748470  108479 watch_cache.go:405] Replace watchCache (rev: 47710) 
I0919 10:31:22.748580  108479 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.748617  108479 reflector.go:153] Listing and watching *core.Endpoints from storage/cacher.go:/services/endpoints
I0919 10:31:22.748700  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:31:22.748726  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:31:22.749486  108479 store.go:1342] Monitoring nodes count at <storage-prefix>//minions
I0919 10:31:22.749593  108479 reflector.go:153] Listing and watching *core.Node from storage/cacher.go:/minions
I0919 10:31:22.749658  108479 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.749840  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:31:22.749864  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:31:22.749944  108479 watch_cache.go:405] Replace watchCache (rev: 47710) 
I0919 10:31:22.750482  108479 watch_cache.go:405] Replace watchCache (rev: 47710) 
I0919 10:31:22.751124  108479 store.go:1342] Monitoring pods count at <storage-prefix>//pods
I0919 10:31:22.751225  108479 reflector.go:153] Listing and watching *core.Pod from storage/cacher.go:/pods
I0919 10:31:22.751350  108479 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.751466  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:31:22.751484  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:31:22.752457  108479 watch_cache.go:405] Replace watchCache (rev: 47710) 
I0919 10:31:22.752766  108479 store.go:1342] Monitoring serviceaccounts count at <storage-prefix>//serviceaccounts
I0919 10:31:22.752850  108479 reflector.go:153] Listing and watching *core.ServiceAccount from storage/cacher.go:/serviceaccounts
I0919 10:31:22.752951  108479 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.753661  108479 watch_cache.go:405] Replace watchCache (rev: 47710) 
I0919 10:31:22.753694  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:31:22.753716  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:31:22.754447  108479 store.go:1342] Monitoring services count at <storage-prefix>//services/specs
I0919 10:31:22.754489  108479 reflector.go:153] Listing and watching *core.Service from storage/cacher.go:/services/specs
I0919 10:31:22.754533  108479 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.754689  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:31:22.754719  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:31:22.755543  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:31:22.755571  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:31:22.755617  108479 watch_cache.go:405] Replace watchCache (rev: 47710) 
I0919 10:31:22.756709  108479 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.756824  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:31:22.756844  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:31:22.757466  108479 store.go:1342] Monitoring replicationcontrollers count at <storage-prefix>//controllers
I0919 10:31:22.757494  108479 rest.go:115] the default service ipfamily for this cluster is: IPv4
I0919 10:31:22.757539  108479 reflector.go:153] Listing and watching *core.ReplicationController from storage/cacher.go:/controllers
I0919 10:31:22.757934  108479 storage_factory.go:285] storing bindings in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.758077  108479 storage_factory.go:285] storing componentstatuses in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.758632  108479 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.758669  108479 watch_cache.go:405] Replace watchCache (rev: 47710) 
I0919 10:31:22.759375  108479 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.760072  108479 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.760560  108479 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.761056  108479 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.761245  108479 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.761467  108479 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.761827  108479 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.762400  108479 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.762743  108479 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.763422  108479 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.763759  108479 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.764259  108479 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.764604  108479 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.765114  108479 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.765381  108479 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.765569  108479 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.765786  108479 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.765954  108479 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.766124  108479 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.766628  108479 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.767359  108479 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.767599  108479 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.768530  108479 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.769299  108479 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.769694  108479 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.770123  108479 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.770826  108479 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.771252  108479 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.772002  108479 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.772772  108479 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.773554  108479 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.774087  108479 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.774394  108479 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.774626  108479 master.go:450] Skipping disabled API group "auditregistration.k8s.io".
I0919 10:31:22.774697  108479 master.go:461] Enabling API group "authentication.k8s.io".
I0919 10:31:22.774739  108479 master.go:461] Enabling API group "authorization.k8s.io".
I0919 10:31:22.774910  108479 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.775240  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:31:22.775332  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:31:22.776483  108479 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0919 10:31:22.776581  108479 reflector.go:153] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0919 10:31:22.776749  108479 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.776929  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:31:22.776963  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:31:22.777800  108479 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0919 10:31:22.777904  108479 reflector.go:153] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0919 10:31:22.778017  108479 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.778396  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:31:22.778556  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:31:22.778786  108479 watch_cache.go:405] Replace watchCache (rev: 47710) 
I0919 10:31:22.779057  108479 watch_cache.go:405] Replace watchCache (rev: 47710) 
I0919 10:31:22.779851  108479 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0919 10:31:22.779878  108479 master.go:461] Enabling API group "autoscaling".
I0919 10:31:22.779886  108479 reflector.go:153] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0919 10:31:22.780037  108479 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.780192  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:31:22.780214  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:31:22.780909  108479 watch_cache.go:405] Replace watchCache (rev: 47710) 
I0919 10:31:22.781142  108479 store.go:1342] Monitoring jobs.batch count at <storage-prefix>//jobs
I0919 10:31:22.781267  108479 reflector.go:153] Listing and watching *batch.Job from storage/cacher.go:/jobs
I0919 10:31:22.781339  108479 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.781655  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:31:22.781675  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:31:22.782414  108479 watch_cache.go:405] Replace watchCache (rev: 47710) 
I0919 10:31:22.782569  108479 store.go:1342] Monitoring cronjobs.batch count at <storage-prefix>//cronjobs
I0919 10:31:22.782591  108479 master.go:461] Enabling API group "batch".
I0919 10:31:22.782610  108479 reflector.go:153] Listing and watching *batch.CronJob from storage/cacher.go:/cronjobs
I0919 10:31:22.782948  108479 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.783198  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:31:22.783231  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:31:22.783378  108479 watch_cache.go:405] Replace watchCache (rev: 47710) 
I0919 10:31:22.783911  108479 store.go:1342] Monitoring certificatesigningrequests.certificates.k8s.io count at <storage-prefix>//certificatesigningrequests
I0919 10:31:22.783939  108479 reflector.go:153] Listing and watching *certificates.CertificateSigningRequest from storage/cacher.go:/certificatesigningrequests
I0919 10:31:22.783942  108479 master.go:461] Enabling API group "certificates.k8s.io".
I0919 10:31:22.784128  108479 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.784327  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:31:22.784353  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:31:22.785290  108479 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0919 10:31:22.785361  108479 reflector.go:153] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0919 10:31:22.785684  108479 watch_cache.go:405] Replace watchCache (rev: 47710) 
I0919 10:31:22.786059  108479 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.786341  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:31:22.786464  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:31:22.786815  108479 watch_cache.go:405] Replace watchCache (rev: 47710) 
I0919 10:31:22.787546  108479 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0919 10:31:22.787567  108479 master.go:461] Enabling API group "coordination.k8s.io".
I0919 10:31:22.787577  108479 master.go:450] Skipping disabled API group "discovery.k8s.io".
I0919 10:31:22.787660  108479 reflector.go:153] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0919 10:31:22.787761  108479 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.787950  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:31:22.787971  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:31:22.788757  108479 watch_cache.go:405] Replace watchCache (rev: 47710) 
I0919 10:31:22.788803  108479 reflector.go:153] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0919 10:31:22.788782  108479 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0919 10:31:22.788843  108479 master.go:461] Enabling API group "extensions".
I0919 10:31:22.789072  108479 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.789323  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:31:22.789349  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:31:22.790877  108479 watch_cache.go:405] Replace watchCache (rev: 47710) 
I0919 10:31:22.791237  108479 store.go:1342] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I0919 10:31:22.791307  108479 reflector.go:153] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I0919 10:31:22.791629  108479 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.791805  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:31:22.791876  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:31:22.792765  108479 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0919 10:31:22.792885  108479 watch_cache.go:405] Replace watchCache (rev: 47710) 
I0919 10:31:22.792897  108479 master.go:461] Enabling API group "networking.k8s.io".
I0919 10:31:22.793153  108479 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.792864  108479 reflector.go:153] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0919 10:31:22.793532  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:31:22.793573  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:31:22.794478  108479 watch_cache.go:405] Replace watchCache (rev: 47710) 
I0919 10:31:22.794761  108479 store.go:1342] Monitoring runtimeclasses.node.k8s.io count at <storage-prefix>//runtimeclasses
I0919 10:31:22.794968  108479 master.go:461] Enabling API group "node.k8s.io".
I0919 10:31:22.795137  108479 reflector.go:153] Listing and watching *node.RuntimeClass from storage/cacher.go:/runtimeclasses
I0919 10:31:22.795296  108479 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.795571  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:31:22.795644  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:31:22.796157  108479 watch_cache.go:405] Replace watchCache (rev: 47710) 
I0919 10:31:22.796409  108479 store.go:1342] Monitoring poddisruptionbudgets.policy count at <storage-prefix>//poddisruptionbudgets
I0919 10:31:22.796527  108479 reflector.go:153] Listing and watching *policy.PodDisruptionBudget from storage/cacher.go:/poddisruptionbudgets
I0919 10:31:22.796575  108479 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.796702  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:31:22.796721  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:31:22.797079  108479 watch_cache.go:405] Replace watchCache (rev: 47710) 
I0919 10:31:22.798428  108479 store.go:1342] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicy
I0919 10:31:22.798455  108479 master.go:461] Enabling API group "policy".
I0919 10:31:22.798514  108479 reflector.go:153] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicy
I0919 10:31:22.798545  108479 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.798854  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:31:22.798882  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:31:22.799642  108479 watch_cache.go:405] Replace watchCache (rev: 47710) 
I0919 10:31:22.799747  108479 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0919 10:31:22.799873  108479 reflector.go:153] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0919 10:31:22.799923  108479 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.800115  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:31:22.800132  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:31:22.800750  108479 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0919 10:31:22.800792  108479 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.800879  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:31:22.800895  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:31:22.800897  108479 reflector.go:153] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0919 10:31:22.801892  108479 watch_cache.go:405] Replace watchCache (rev: 47710) 
I0919 10:31:22.802121  108479 watch_cache.go:405] Replace watchCache (rev: 47710) 
I0919 10:31:22.802425  108479 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0919 10:31:22.802581  108479 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.803326  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:31:22.803353  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:31:22.802692  108479 reflector.go:153] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0919 10:31:22.804044  108479 watch_cache.go:405] Replace watchCache (rev: 47710) 
I0919 10:31:22.804129  108479 reflector.go:153] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0919 10:31:22.804101  108479 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0919 10:31:22.804366  108479 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.804816  108479 watch_cache.go:405] Replace watchCache (rev: 47710) 
I0919 10:31:22.805006  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:31:22.805098  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:31:22.805781  108479 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0919 10:31:22.805854  108479 reflector.go:153] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0919 10:31:22.805909  108479 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.805994  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:31:22.806007  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:31:22.806698  108479 watch_cache.go:405] Replace watchCache (rev: 47710) 
I0919 10:31:22.807079  108479 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0919 10:31:22.807164  108479 reflector.go:153] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0919 10:31:22.807204  108479 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.807625  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:31:22.807693  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:31:22.807866  108479 watch_cache.go:405] Replace watchCache (rev: 47710) 
I0919 10:31:22.808438  108479 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0919 10:31:22.808561  108479 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.808643  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:31:22.808655  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:31:22.808703  108479 reflector.go:153] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0919 10:31:22.809729  108479 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0919 10:31:22.809766  108479 master.go:461] Enabling API group "rbac.authorization.k8s.io".
I0919 10:31:22.810111  108479 reflector.go:153] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0919 10:31:22.810528  108479 watch_cache.go:405] Replace watchCache (rev: 47710) 
I0919 10:31:22.811053  108479 watch_cache.go:405] Replace watchCache (rev: 47710) 
I0919 10:31:22.811207  108479 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.811321  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:31:22.811336  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:31:22.811954  108479 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0919 10:31:22.811992  108479 reflector.go:153] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0919 10:31:22.812071  108479 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.812241  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:31:22.812267  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:31:22.812834  108479 watch_cache.go:405] Replace watchCache (rev: 47710) 
I0919 10:31:22.813167  108479 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0919 10:31:22.813248  108479 master.go:461] Enabling API group "scheduling.k8s.io".
I0919 10:31:22.813255  108479 reflector.go:153] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0919 10:31:22.813352  108479 master.go:450] Skipping disabled API group "settings.k8s.io".
I0919 10:31:22.813575  108479 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.813817  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:31:22.813845  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:31:22.814143  108479 watch_cache.go:405] Replace watchCache (rev: 47710) 
I0919 10:31:22.814376  108479 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0919 10:31:22.814418  108479 reflector.go:153] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0919 10:31:22.814477  108479 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.814563  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:31:22.814581  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:31:22.815314  108479 watch_cache.go:405] Replace watchCache (rev: 47710) 
I0919 10:31:22.815433  108479 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0919 10:31:22.815461  108479 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.815472  108479 reflector.go:153] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0919 10:31:22.815563  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:31:22.815600  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:31:22.816112  108479 store.go:1342] Monitoring csinodes.storage.k8s.io count at <storage-prefix>//csinodes
I0919 10:31:22.816135  108479 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.816190  108479 reflector.go:153] Listing and watching *storage.CSINode from storage/cacher.go:/csinodes
I0919 10:31:22.816254  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:31:22.816265  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:31:22.816916  108479 watch_cache.go:405] Replace watchCache (rev: 47710) 
I0919 10:31:22.817389  108479 watch_cache.go:405] Replace watchCache (rev: 47710) 
I0919 10:31:22.819605  108479 store.go:1342] Monitoring csidrivers.storage.k8s.io count at <storage-prefix>//csidrivers
I0919 10:31:22.819736  108479 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.819855  108479 reflector.go:153] Listing and watching *storage.CSIDriver from storage/cacher.go:/csidrivers
I0919 10:31:22.819899  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:31:22.820053  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:31:22.822188  108479 watch_cache.go:405] Replace watchCache (rev: 47710) 
I0919 10:31:22.822963  108479 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0919 10:31:22.823056  108479 reflector.go:153] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0919 10:31:22.823119  108479 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.823377  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:31:22.823420  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:31:22.823926  108479 watch_cache.go:405] Replace watchCache (rev: 47710) 
I0919 10:31:22.824097  108479 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0919 10:31:22.824118  108479 master.go:461] Enabling API group "storage.k8s.io".
I0919 10:31:22.824156  108479 reflector.go:153] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0919 10:31:22.824290  108479 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.824421  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:31:22.824455  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:31:22.825141  108479 store.go:1342] Monitoring deployments.apps count at <storage-prefix>//deployments
I0919 10:31:22.825456  108479 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.825930  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:31:22.826053  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:31:22.825254  108479 reflector.go:153] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0919 10:31:22.826786  108479 watch_cache.go:405] Replace watchCache (rev: 47710) 
I0919 10:31:22.826869  108479 store.go:1342] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0919 10:31:22.826889  108479 reflector.go:153] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0919 10:31:22.827088  108479 watch_cache.go:405] Replace watchCache (rev: 47710) 
I0919 10:31:22.827965  108479 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.828221  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:31:22.828533  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:31:22.828102  108479 watch_cache.go:405] Replace watchCache (rev: 47710) 
I0919 10:31:22.829130  108479 store.go:1342] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0919 10:31:22.829263  108479 reflector.go:153] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0919 10:31:22.829357  108479 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.829511  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:31:22.829583  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:31:22.830030  108479 store.go:1342] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0919 10:31:22.830161  108479 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.830246  108479 reflector.go:153] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0919 10:31:22.830333  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:31:22.830563  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:31:22.830626  108479 watch_cache.go:405] Replace watchCache (rev: 47710) 
I0919 10:31:22.831436  108479 watch_cache.go:405] Replace watchCache (rev: 47710) 
I0919 10:31:22.831659  108479 store.go:1342] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0919 10:31:22.831677  108479 master.go:461] Enabling API group "apps".
I0919 10:31:22.831698  108479 reflector.go:153] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0919 10:31:22.831706  108479 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.831792  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:31:22.831809  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:31:22.832467  108479 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0919 10:31:22.832525  108479 reflector.go:153] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0919 10:31:22.832526  108479 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.832644  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:31:22.832680  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:31:22.832862  108479 watch_cache.go:405] Replace watchCache (rev: 47710) 
I0919 10:31:22.833418  108479 watch_cache.go:405] Replace watchCache (rev: 47710) 
I0919 10:31:22.833556  108479 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0919 10:31:22.833592  108479 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.833694  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:31:22.833721  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:31:22.833726  108479 reflector.go:153] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0919 10:31:22.834450  108479 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0919 10:31:22.834541  108479 reflector.go:153] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0919 10:31:22.834577  108479 watch_cache.go:405] Replace watchCache (rev: 47710) 
I0919 10:31:22.834694  108479 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.834969  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:31:22.835026  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:31:22.835779  108479 watch_cache.go:405] Replace watchCache (rev: 47710) 
I0919 10:31:22.836495  108479 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0919 10:31:22.836521  108479 master.go:461] Enabling API group "admissionregistration.k8s.io".
I0919 10:31:22.836561  108479 reflector.go:153] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0919 10:31:22.836549  108479 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.836924  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:31:22.836952  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:31:22.837130  108479 watch_cache.go:405] Replace watchCache (rev: 47710) 
I0919 10:31:22.837767  108479 store.go:1342] Monitoring events count at <storage-prefix>//events
I0919 10:31:22.837792  108479 master.go:461] Enabling API group "events.k8s.io".
I0919 10:31:22.837970  108479 reflector.go:153] Listing and watching *core.Event from storage/cacher.go:/events
I0919 10:31:22.838007  108479 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.838274  108479 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.838529  108479 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.838640  108479 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.838732  108479 watch_cache.go:405] Replace watchCache (rev: 47710) 
I0919 10:31:22.838788  108479 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.838905  108479 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.839262  108479 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.839435  108479 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.839628  108479 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.839795  108479 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.840598  108479 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.840905  108479 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.841798  108479 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.842121  108479 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.842778  108479 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.843061  108479 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.843699  108479 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.843985  108479 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.844634  108479 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.844922  108479 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0919 10:31:22.844977  108479 genericapiserver.go:404] Skipping API batch/v2alpha1 because it has no resources.
I0919 10:31:22.845512  108479 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.845704  108479 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.845952  108479 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.846626  108479 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.847318  108479 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.847973  108479 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.848265  108479 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.848912  108479 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.849523  108479 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.849850  108479 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.850402  108479 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0919 10:31:22.850470  108479 genericapiserver.go:404] Skipping API node.k8s.io/v1alpha1 because it has no resources.
I0919 10:31:22.851118  108479 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.851425  108479 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.851906  108479 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.852482  108479 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.852893  108479 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.853462  108479 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.853997  108479 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.854501  108479 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.854963  108479 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.855534  108479 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.856134  108479 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0919 10:31:22.856222  108479 genericapiserver.go:404] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
I0919 10:31:22.856809  108479 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.857365  108479 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0919 10:31:22.857423  108479 genericapiserver.go:404] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
I0919 10:31:22.858012  108479 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.858498  108479 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.858777  108479 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.859253  108479 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.859671  108479 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.860189  108479 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.860678  108479 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0919 10:31:22.860735  108479 genericapiserver.go:404] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
I0919 10:31:22.861328  108479 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.861873  108479 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.862153  108479 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.863203  108479 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.863456  108479 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.863698  108479 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.864234  108479 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.864480  108479 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.864722  108479 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.865323  108479 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.865614  108479 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.865871  108479 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0919 10:31:22.865930  108479 genericapiserver.go:404] Skipping API apps/v1beta2 because it has no resources.
W0919 10:31:22.865938  108479 genericapiserver.go:404] Skipping API apps/v1beta1 because it has no resources.
I0919 10:31:22.866489  108479 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.866995  108479 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.867543  108479 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.868017  108479 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.868656  108479 storage_factory.go:285] storing events.events.k8s.io in events.k8s.io/v1beta1, reading as events.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"fa06a258-27da-4ba3-a3ab-73da038bfdb0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:31:22.871359  108479 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 10:31:22.871387  108479 healthz.go:177] healthz check poststarthook/bootstrap-controller failed: not finished
I0919 10:31:22.871398  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:31:22.871408  108479 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 10:31:22.871416  108479 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 10:31:22.871424  108479 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[-]poststarthook/bootstrap-controller failed: reason withheld
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 10:31:22.871472  108479 httplog.go:90] GET /healthz: (292.024µs) 0 [Go-http-client/1.1 127.0.0.1:55700]
I0919 10:31:22.872616  108479 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.444803ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55702]
I0919 10:31:22.874829  108479 httplog.go:90] GET /api/v1/services: (1.046059ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55702]
I0919 10:31:22.878073  108479 httplog.go:90] GET /api/v1/services: (784.686µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55702]
I0919 10:31:22.880239  108479 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 10:31:22.880284  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:31:22.880296  108479 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 10:31:22.880321  108479 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 10:31:22.880330  108479 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 10:31:22.880389  108479 httplog.go:90] GET /healthz: (231.899µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55702]
I0919 10:31:22.881310  108479 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.020155ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55700]
I0919 10:31:22.881989  108479 httplog.go:90] GET /api/v1/services: (848.254µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55702]
I0919 10:31:22.882640  108479 httplog.go:90] GET /api/v1/services: (1.019256ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55700]
I0919 10:31:22.883779  108479 httplog.go:90] POST /api/v1/namespaces: (2.046573ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55704]
I0919 10:31:22.885278  108479 httplog.go:90] GET /api/v1/namespaces/kube-public: (980.494µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55700]
I0919 10:31:22.887212  108479 httplog.go:90] POST /api/v1/namespaces: (1.417073ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55700]
I0919 10:31:22.888339  108479 httplog.go:90] GET /api/v1/namespaces/kube-node-lease: (772.672µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55700]
I0919 10:31:22.890353  108479 httplog.go:90] POST /api/v1/namespaces: (1.581874ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55700]
I0919 10:31:22.972698  108479 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 10:31:22.972967  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:31:22.973049  108479 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 10:31:22.973089  108479 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 10:31:22.973130  108479 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 10:31:22.973349  108479 httplog.go:90] GET /healthz: (828.035µs) 0 [Go-http-client/1.1 127.0.0.1:55700]
I0919 10:31:22.981411  108479 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 10:31:22.981452  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:31:22.981465  108479 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 10:31:22.981475  108479 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 10:31:22.981483  108479 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 10:31:22.981566  108479 httplog.go:90] GET /healthz: (300.192µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55700]
I0919 10:31:23.072673  108479 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 10:31:23.072699  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:31:23.072707  108479 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 10:31:23.072713  108479 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 10:31:23.072718  108479 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 10:31:23.072761  108479 httplog.go:90] GET /healthz: (213.278µs) 0 [Go-http-client/1.1 127.0.0.1:55700]
I0919 10:31:23.081407  108479 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 10:31:23.081432  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:31:23.081441  108479 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 10:31:23.081447  108479 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 10:31:23.081454  108479 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 10:31:23.081483  108479 httplog.go:90] GET /healthz: (227.745µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55700]
I0919 10:31:23.172610  108479 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 10:31:23.172654  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:31:23.172667  108479 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 10:31:23.172703  108479 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 10:31:23.172711  108479 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 10:31:23.172740  108479 httplog.go:90] GET /healthz: (291.42µs) 0 [Go-http-client/1.1 127.0.0.1:55700]
I0919 10:31:23.181526  108479 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 10:31:23.181852  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:31:23.181952  108479 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 10:31:23.182044  108479 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 10:31:23.182114  108479 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 10:31:23.182332  108479 httplog.go:90] GET /healthz: (939.329µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55700]
I0919 10:31:23.272816  108479 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 10:31:23.272854  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:31:23.272865  108479 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 10:31:23.272872  108479 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 10:31:23.272877  108479 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 10:31:23.272915  108479 httplog.go:90] GET /healthz: (313.129µs) 0 [Go-http-client/1.1 127.0.0.1:55700]
I0919 10:31:23.281772  108479 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 10:31:23.281830  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:31:23.281841  108479 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 10:31:23.281847  108479 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 10:31:23.281853  108479 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 10:31:23.282071  108479 httplog.go:90] GET /healthz: (545.336µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55700]
I0919 10:31:23.372661  108479 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 10:31:23.372707  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:31:23.372720  108479 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 10:31:23.372730  108479 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 10:31:23.372737  108479 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 10:31:23.372798  108479 httplog.go:90] GET /healthz: (285.488µs) 0 [Go-http-client/1.1 127.0.0.1:55700]
I0919 10:31:23.389236  108479 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 10:31:23.389274  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:31:23.389288  108479 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 10:31:23.389297  108479 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 10:31:23.389306  108479 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 10:31:23.389378  108479 httplog.go:90] GET /healthz: (375.268µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55700]
I0919 10:31:23.472616  108479 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 10:31:23.472659  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:31:23.472668  108479 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 10:31:23.472675  108479 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 10:31:23.472680  108479 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 10:31:23.472716  108479 httplog.go:90] GET /healthz: (289.591µs) 0 [Go-http-client/1.1 127.0.0.1:55700]
I0919 10:31:23.481364  108479 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 10:31:23.481401  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:31:23.481411  108479 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 10:31:23.481417  108479 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 10:31:23.481422  108479 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 10:31:23.481502  108479 httplog.go:90] GET /healthz: (253.563µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55700]
I0919 10:31:23.572549  108479 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 10:31:23.572590  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:31:23.572600  108479 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 10:31:23.572607  108479 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 10:31:23.572612  108479 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 10:31:23.572667  108479 httplog.go:90] GET /healthz: (239.68µs) 0 [Go-http-client/1.1 127.0.0.1:55700]
I0919 10:31:23.581475  108479 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 10:31:23.581509  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:31:23.581519  108479 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 10:31:23.581525  108479 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 10:31:23.581546  108479 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 10:31:23.581602  108479 httplog.go:90] GET /healthz: (267.208µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55700]
I0919 10:31:23.672576  108479 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 10:31:23.672635  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:31:23.672648  108479 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 10:31:23.672657  108479 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 10:31:23.672665  108479 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 10:31:23.672698  108479 httplog.go:90] GET /healthz: (279.737µs) 0 [Go-http-client/1.1 127.0.0.1:55700]
I0919 10:31:23.681623  108479 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 10:31:23.681688  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:31:23.681712  108479 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 10:31:23.681722  108479 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 10:31:23.681730  108479 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 10:31:23.681776  108479 httplog.go:90] GET /healthz: (329.68µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55700]
I0919 10:31:23.712728  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:31:23.712843  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:31:23.773462  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:31:23.773493  108479 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 10:31:23.773500  108479 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 10:31:23.773506  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 10:31:23.773566  108479 httplog.go:90] GET /healthz: (1.213291ms) 0 [Go-http-client/1.1 127.0.0.1:55700]
I0919 10:31:23.782269  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:31:23.782295  108479 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 10:31:23.782302  108479 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 10:31:23.782310  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 10:31:23.782384  108479 httplog.go:90] GET /healthz: (1.079947ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55700]
I0919 10:31:23.873005  108479 httplog.go:90] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-node-critical: (1.327803ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56008]
I0919 10:31:23.873057  108479 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.699853ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55700]
I0919 10:31:23.873010  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.65227ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55702]
I0919 10:31:23.873869  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:31:23.873901  108479 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 10:31:23.873912  108479 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 10:31:23.873920  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 10:31:23.873950  108479 httplog.go:90] GET /healthz: (904.138µs) 0 [Go-http-client/1.1 127.0.0.1:56010]
I0919 10:31:23.874319  108479 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: (923.893µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55700]
I0919 10:31:23.874358  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (960.512µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56008]
I0919 10:31:23.875394  108479 httplog.go:90] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (1.742618ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56012]
I0919 10:31:23.875756  108479 storage_scheduling.go:139] created PriorityClass system-node-critical with value 2000001000
I0919 10:31:23.876376  108479 httplog.go:90] POST /api/v1/namespaces/kube-system/configmaps: (1.637978ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:23.876887  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (2.126409ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55700]
I0919 10:31:23.877150  108479 httplog.go:90] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-cluster-critical: (1.202255ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56012]
I0919 10:31:23.878599  108479 httplog.go:90] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (1.126048ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56012]
I0919 10:31:23.878693  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (1.008874ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55700]
I0919 10:31:23.878822  108479 storage_scheduling.go:139] created PriorityClass system-cluster-critical with value 2000000000
I0919 10:31:23.878844  108479 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
I0919 10:31:23.879915  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (854.949µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56012]
I0919 10:31:23.881264  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (821.997µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56012]
I0919 10:31:23.881793  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:31:23.881928  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 10:31:23.882081  108479 httplog.go:90] GET /healthz: (889.156µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:23.883059  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (1.314339ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56012]
I0919 10:31:23.884213  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (613.501µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56012]
I0919 10:31:23.885617  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (929.18µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56012]
I0919 10:31:23.886699  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-admin: (630.468µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56012]
I0919 10:31:23.888686  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.664253ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56012]
I0919 10:31:23.888880  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/cluster-admin
I0919 10:31:23.890084  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:discovery: (955.044µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56012]
I0919 10:31:23.891893  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.453945ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56012]
I0919 10:31:23.892289  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:discovery
I0919 10:31:23.893300  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:basic-user: (834.157µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56012]
I0919 10:31:23.894985  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.418928ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56012]
I0919 10:31:23.895374  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:basic-user
I0919 10:31:23.896705  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:public-info-viewer: (1.155464ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56012]
I0919 10:31:23.899066  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.789667ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56012]
I0919 10:31:23.899402  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:public-info-viewer
I0919 10:31:23.900775  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (1.153061ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56012]
I0919 10:31:23.903064  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.710704ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56012]
I0919 10:31:23.903352  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/admin
I0919 10:31:23.904424  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (847.227µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56012]
I0919 10:31:23.906238  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.401178ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56012]
I0919 10:31:23.906441  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/edit
I0919 10:31:23.907558  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (886.705µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56012]
I0919 10:31:23.910222  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.22786ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56012]
I0919 10:31:23.910480  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/view
I0919 10:31:23.911967  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (1.186839ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56012]
I0919 10:31:23.914609  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.18047ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56012]
I0919 10:31:23.914817  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin
I0919 10:31:23.916306  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (1.182027ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56012]
I0919 10:31:23.923685  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (6.641317ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56012]
I0919 10:31:23.924123  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit
I0919 10:31:23.925621  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (1.115203ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56012]
I0919 10:31:23.928126  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.886011ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56012]
I0919 10:31:23.929278  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view
I0919 10:31:23.930279  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:heapster: (776.467µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56012]
I0919 10:31:23.932165  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.331849ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56012]
I0919 10:31:23.932396  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:heapster
I0919 10:31:23.933070  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node: (571.929µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56012]
I0919 10:31:23.935090  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.621791ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56012]
I0919 10:31:23.935532  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node
I0919 10:31:23.936509  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-problem-detector: (790.441µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56012]
I0919 10:31:23.938598  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.75722ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56012]
I0919 10:31:23.938860  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector
I0919 10:31:23.940394  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kubelet-api-admin: (1.320407ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56012]
I0919 10:31:23.942279  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.42107ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56012]
I0919 10:31:23.942586  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kubelet-api-admin
I0919 10:31:23.943601  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-bootstrapper: (844.701µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56012]
I0919 10:31:23.945342  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.373529ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56012]
I0919 10:31:23.945619  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-bootstrapper
I0919 10:31:23.946523  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:auth-delegator: (744.197µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56012]
I0919 10:31:23.948396  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.470262ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56012]
I0919 10:31:23.948584  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:auth-delegator
I0919 10:31:23.949761  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-aggregator: (900.914µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56012]
I0919 10:31:23.951502  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.242124ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56012]
I0919 10:31:23.951665  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator
I0919 10:31:23.952964  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-controller-manager: (942.893µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56012]
I0919 10:31:23.956075  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.288649ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56012]
I0919 10:31:23.956372  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager
I0919 10:31:23.957467  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-dns: (851.758µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56012]
I0919 10:31:23.959140  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.213251ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56012]
I0919 10:31:23.959315  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-dns
I0919 10:31:23.960375  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:persistent-volume-provisioner: (847.313µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56012]
I0919 10:31:23.962427  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.618407ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56012]
I0919 10:31:23.962670  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner
I0919 10:31:23.964968  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-attacher: (801.039µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56012]
I0919 10:31:23.967345  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.866857ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56012]
I0919 10:31:23.967579  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:csi-external-attacher
I0919 10:31:23.968702  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:nodeclient: (872.029µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56012]
I0919 10:31:23.970556  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.224761ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56012]
I0919 10:31:23.970816  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:nodeclient
I0919 10:31:23.971892  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient: (871.372µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56012]
I0919 10:31:23.973058  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:31:23.973089  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 10:31:23.973126  108479 httplog.go:90] GET /healthz: (772.025µs) 0 [Go-http-client/1.1 127.0.0.1:56012]
I0919 10:31:23.974024  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.634005ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:23.974288  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
I0919 10:31:23.975297  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:volume-scheduler: (759.958µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:23.977571  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.730037ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:23.977886  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:volume-scheduler
I0919 10:31:23.978778  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-proxier: (698.018µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:23.980360  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.166205ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:23.980627  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-proxier
I0919 10:31:23.981582  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-scheduler: (749.408µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:23.981780  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:31:23.981867  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 10:31:23.982044  108479 httplog.go:90] GET /healthz: (909.81µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56012]
I0919 10:31:23.983833  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.491869ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:23.984079  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler
I0919 10:31:23.985166  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-provisioner: (840.691µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:23.987246  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.419376ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:23.987427  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:csi-external-provisioner
I0919 10:31:23.988992  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:attachdetach-controller: (1.393411ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:23.990946  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.481629ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:23.991204  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0919 10:31:23.992146  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:clusterrole-aggregation-controller: (753.539µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:23.993774  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.274029ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:23.993936  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0919 10:31:23.994830  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:cronjob-controller: (710.967µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:23.996803  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.537659ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:23.997033  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0919 10:31:23.997911  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:daemon-set-controller: (707.237µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:23.999898  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.581863ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:24.000124  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0919 10:31:24.001285  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:deployment-controller: (922.937µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:24.003122  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.472806ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:24.003481  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller
I0919 10:31:24.004684  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:disruption-controller: (918.891µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:24.006774  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.498659ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:24.006999  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller
I0919 10:31:24.008101  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:endpoint-controller: (837.874µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:24.010359  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.757506ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:24.010623  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0919 10:31:24.011823  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:expand-controller: (843.479µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:24.013864  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.543868ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:24.014215  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:expand-controller
I0919 10:31:24.015426  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:generic-garbage-collector: (964.338µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:24.017100  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.368147ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:24.019032  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0919 10:31:24.020896  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:horizontal-pod-autoscaler: (1.567647ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:24.023161  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.76987ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:24.023435  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0919 10:31:24.025782  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:job-controller: (2.201036ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:24.028782  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.615485ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:24.029155  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller
I0919 10:31:24.029936  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:namespace-controller: (564.027µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:24.031613  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.375683ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:24.031783  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:namespace-controller
I0919 10:31:24.032770  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:node-controller: (843.243µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:24.034666  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.326179ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:24.035030  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:node-controller
I0919 10:31:24.036139  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:persistent-volume-binder: (905.272µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:24.038470  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.830171ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:24.038779  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0919 10:31:24.039811  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pod-garbage-collector: (862.025µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:24.041566  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.113312ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:24.041818  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0919 10:31:24.042817  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replicaset-controller: (735.592µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:24.045001  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.659711ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:24.045563  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0919 10:31:24.046452  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replication-controller: (727.51µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:24.048298  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.460203ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:24.048494  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller
I0919 10:31:24.049577  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:resourcequota-controller: (877.576µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:24.051427  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.459452ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:24.051748  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0919 10:31:24.052815  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:route-controller: (880.306µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:24.054921  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.361912ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:24.055104  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:route-controller
I0919 10:31:24.075442  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-account-controller: (20.021574ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:24.075514  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:31:24.075537  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 10:31:24.075567  108479 httplog.go:90] GET /healthz: (2.982798ms) 0 [Go-http-client/1.1 127.0.0.1:56012]
I0919 10:31:24.077998  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.946844ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56012]
I0919 10:31:24.078401  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-account-controller
I0919 10:31:24.079702  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-controller: (983.887µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56012]
I0919 10:31:24.084262  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:31:24.084299  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 10:31:24.084334  108479 httplog.go:90] GET /healthz: (3.017678ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:24.085365  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (5.219586ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56012]
I0919 10:31:24.085507  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-controller
I0919 10:31:24.086448  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:statefulset-controller: (764.137µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56012]
I0919 10:31:24.088292  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.495011ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56012]
I0919 10:31:24.088442  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0919 10:31:24.089459  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:ttl-controller: (899.465µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56012]
I0919 10:31:24.091085  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.329966ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56012]
I0919 10:31:24.091388  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-controller
I0919 10:31:24.092446  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:certificate-controller: (779.498µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56012]
I0919 10:31:24.093849  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.143364ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56012]
I0919 10:31:24.094221  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller
I0919 10:31:24.113062  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pvc-protection-controller: (1.614073ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56012]
I0919 10:31:24.133648  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.992577ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56012]
I0919 10:31:24.134234  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0919 10:31:24.152981  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pv-protection-controller: (1.414093ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56012]
I0919 10:31:24.173090  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:31:24.173381  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 10:31:24.173602  108479 httplog.go:90] GET /healthz: (1.246109ms) 0 [Go-http-client/1.1 127.0.0.1:56010]
I0919 10:31:24.174392  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.524248ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56012]
I0919 10:31:24.174642  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0919 10:31:24.182078  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:31:24.182322  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 10:31:24.182565  108479 httplog.go:90] GET /healthz: (1.314896ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56012]
I0919 10:31:24.192931  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin: (1.457603ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56012]
I0919 10:31:24.217488  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (5.946011ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56012]
I0919 10:31:24.217864  108479 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin
I0919 10:31:24.232809  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (1.251027ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56012]
I0919 10:31:24.253662  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.065716ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56012]
I0919 10:31:24.254014  108479 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery
I0919 10:31:24.272960  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:basic-user: (1.452336ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56012]
I0919 10:31:24.273514  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:31:24.273544  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 10:31:24.273584  108479 httplog.go:90] GET /healthz: (1.286248ms) 0 [Go-http-client/1.1 127.0.0.1:56010]
I0919 10:31:24.282340  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:31:24.282372  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 10:31:24.282414  108479 httplog.go:90] GET /healthz: (1.089781ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:24.293499  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.074859ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:24.293927  108479 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user
I0919 10:31:24.313016  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:public-info-viewer: (1.480687ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:24.333526  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.978789ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:24.333819  108479 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:public-info-viewer
I0919 10:31:24.352896  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node-proxier: (1.376006ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:24.373488  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:31:24.373523  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 10:31:24.373571  108479 httplog.go:90] GET /healthz: (1.196454ms) 0 [Go-http-client/1.1 127.0.0.1:56012]
I0919 10:31:24.374066  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.612505ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:24.374422  108479 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier
I0919 10:31:24.382037  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:31:24.382195  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 10:31:24.382338  108479 httplog.go:90] GET /healthz: (1.106313ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:24.392363  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-controller-manager: (988.851µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:24.413582  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.085064ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:24.414090  108479 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager
I0919 10:31:24.438549  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-dns: (1.122585ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:24.453616  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.098226ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:24.453958  108479 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns
I0919 10:31:24.472661  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-scheduler: (1.213407ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:24.473238  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:31:24.473379  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 10:31:24.473428  108479 httplog.go:90] GET /healthz: (1.03095ms) 0 [Go-http-client/1.1 127.0.0.1:56012]
I0919 10:31:24.481820  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:31:24.481851  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 10:31:24.481879  108479 httplog.go:90] GET /healthz: (687.567µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56012]
I0919 10:31:24.493624  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.099116ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56012]
I0919 10:31:24.493879  108479 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler
I0919 10:31:24.513104  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:volume-scheduler: (1.38661ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56012]
I0919 10:31:24.534016  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.418361ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56012]
I0919 10:31:24.534260  108479 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:volume-scheduler
I0919 10:31:24.553028  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node: (1.420695ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56012]
I0919 10:31:24.573848  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.374175ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56012]
I0919 10:31:24.573962  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:31:24.573988  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 10:31:24.574021  108479 httplog.go:90] GET /healthz: (1.070467ms) 0 [Go-http-client/1.1 127.0.0.1:56010]
I0919 10:31:24.574389  108479 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node
I0919 10:31:24.582746  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:31:24.582790  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 10:31:24.582828  108479 httplog.go:90] GET /healthz: (1.491657ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56012]
I0919 10:31:24.592857  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:attachdetach-controller: (1.424617ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56012]
I0919 10:31:24.613836  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.288417ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56012]
I0919 10:31:24.614078  108479 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0919 10:31:24.632806  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:clusterrole-aggregation-controller: (1.289287ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56012]
I0919 10:31:24.654053  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.517296ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56012]
I0919 10:31:24.654392  108479 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0919 10:31:24.673104  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:cronjob-controller: (1.567757ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56012]
I0919 10:31:24.673693  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:31:24.673724  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 10:31:24.673761  108479 httplog.go:90] GET /healthz: (1.160418ms) 0 [Go-http-client/1.1 127.0.0.1:56010]
I0919 10:31:24.682254  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:31:24.682286  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 10:31:24.682325  108479 httplog.go:90] GET /healthz: (1.059374ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:24.694128  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.60169ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:24.694707  108479 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0919 10:31:24.712628  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:daemon-set-controller: (1.153637ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:24.733577  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.094982ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:24.733833  108479 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0919 10:31:24.753217  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:deployment-controller: (1.773287ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:24.774540  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:31:24.774574  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 10:31:24.774615  108479 httplog.go:90] GET /healthz: (2.264138ms) 0 [Go-http-client/1.1 127.0.0.1:56012]
I0919 10:31:24.774935  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.397373ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:24.775269  108479 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I0919 10:31:24.782070  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:31:24.782096  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 10:31:24.782125  108479 httplog.go:90] GET /healthz: (866.883µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:24.792793  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:disruption-controller: (1.325924ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:24.814055  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.450475ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:24.814281  108479 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I0919 10:31:24.832975  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:endpoint-controller: (1.452749ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:24.853860  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.400047ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:24.854138  108479 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0919 10:31:24.872906  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:expand-controller: (1.467579ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:24.873740  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:31:24.873768  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 10:31:24.873804  108479 httplog.go:90] GET /healthz: (1.444731ms) 0 [Go-http-client/1.1 127.0.0.1:56012]
I0919 10:31:24.882577  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:31:24.882618  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 10:31:24.882668  108479 httplog.go:90] GET /healthz: (1.389772ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56012]
I0919 10:31:24.893473  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.992869ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56012]
I0919 10:31:24.893751  108479 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
I0919 10:31:24.912752  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:generic-garbage-collector: (1.273607ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56012]
I0919 10:31:24.933833  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.346009ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56012]
I0919 10:31:24.934107  108479 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0919 10:31:24.953266  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:horizontal-pod-autoscaler: (1.714654ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56012]
I0919 10:31:24.973752  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.2665ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56012]
I0919 10:31:24.975044  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:31:24.975079  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 10:31:24.975124  108479 httplog.go:90] GET /healthz: (1.294372ms) 0 [Go-http-client/1.1 127.0.0.1:56010]
I0919 10:31:24.975215  108479 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0919 10:31:24.982461  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:31:24.982493  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 10:31:24.983243  108479 httplog.go:90] GET /healthz: (1.134514ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:24.992831  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:job-controller: (1.420045ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:25.013775  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.253119ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:25.013989  108479 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I0919 10:31:25.032636  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:namespace-controller: (1.092881ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:25.053615  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.144783ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:25.053837  108479 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I0919 10:31:25.073328  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:31:25.073359  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 10:31:25.073393  108479 httplog.go:90] GET /healthz: (976.851µs) 0 [Go-http-client/1.1 127.0.0.1:56012]
I0919 10:31:25.075314  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:node-controller: (3.707208ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:25.084595  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:31:25.084638  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 10:31:25.084719  108479 httplog.go:90] GET /healthz: (3.372004ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:25.093119  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.683631ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:25.093342  108479 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I0919 10:31:25.113113  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:persistent-volume-binder: (1.601069ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:25.133418  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.961554ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:25.133861  108479 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0919 10:31:25.153236  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pod-garbage-collector: (1.716946ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:25.173482  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:31:25.173549  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 10:31:25.173586  108479 httplog.go:90] GET /healthz: (1.269069ms) 0 [Go-http-client/1.1 127.0.0.1:56012]
I0919 10:31:25.174148  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.689208ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:25.174388  108479 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0919 10:31:25.182510  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:31:25.182859  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 10:31:25.183415  108479 httplog.go:90] GET /healthz: (1.971179ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:25.192892  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replicaset-controller: (1.402886ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:25.213886  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.173541ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:25.214433  108479 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0919 10:31:25.232617  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replication-controller: (1.164393ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:25.255331  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.853067ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:25.255597  108479 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I0919 10:31:25.273340  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:31:25.273377  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 10:31:25.273408  108479 httplog.go:90] GET /healthz: (990.399µs) 0 [Go-http-client/1.1 127.0.0.1:56012]
I0919 10:31:25.273643  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:resourcequota-controller: (2.086278ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:25.282194  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:31:25.282222  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 10:31:25.282258  108479 httplog.go:90] GET /healthz: (978.658µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:25.293758  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.29093ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:25.294156  108479 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0919 10:31:25.312956  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:route-controller: (1.447694ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:25.333636  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.157102ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:25.333827  108479 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I0919 10:31:25.352592  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-account-controller: (1.075377ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:25.373566  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:31:25.373602  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 10:31:25.373614  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.20013ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:25.373656  108479 httplog.go:90] GET /healthz: (1.307496ms) 0 [Go-http-client/1.1 127.0.0.1:56012]
I0919 10:31:25.373933  108479 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I0919 10:31:25.382390  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:31:25.382420  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 10:31:25.382456  108479 httplog.go:90] GET /healthz: (1.153122ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56012]
I0919 10:31:25.392602  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-controller: (1.164284ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56012]
I0919 10:31:25.414260  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.738976ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56012]
I0919 10:31:25.414517  108479 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I0919 10:31:25.432392  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:statefulset-controller: (1.017811ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56012]
I0919 10:31:25.453407  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.939084ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56012]
I0919 10:31:25.453631  108479 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0919 10:31:25.473111  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:31:25.473310  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 10:31:25.473432  108479 httplog.go:90] GET /healthz: (1.110512ms) 0 [Go-http-client/1.1 127.0.0.1:56010]
I0919 10:31:25.473589  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:ttl-controller: (2.127917ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56012]
I0919 10:31:25.482556  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:31:25.482688  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 10:31:25.482816  108479 httplog.go:90] GET /healthz: (1.510843ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56012]
I0919 10:31:25.493547  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.078402ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56012]
I0919 10:31:25.493795  108479 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I0919 10:31:25.512826  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:certificate-controller: (1.21392ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56012]
I0919 10:31:25.533777  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.28383ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56012]
I0919 10:31:25.534349  108479 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I0919 10:31:25.553194  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pvc-protection-controller: (1.703146ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56012]
I0919 10:31:25.573324  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:31:25.573455  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.069501ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56012]
I0919 10:31:25.573479  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 10:31:25.573638  108479 httplog.go:90] GET /healthz: (1.328648ms) 0 [Go-http-client/1.1 127.0.0.1:56010]
I0919 10:31:25.573860  108479 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0919 10:31:25.582262  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:31:25.582291  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 10:31:25.582333  108479 httplog.go:90] GET /healthz: (1.033041ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:25.593065  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pv-protection-controller: (1.607065ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:25.613459  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.925184ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:25.613871  108479 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0919 10:31:25.632920  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles/system:controller:bootstrap-signer: (1.506084ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:25.634594  108479 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.131464ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:25.653822  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles: (2.325885ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:25.654241  108479 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0919 10:31:25.673557  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:31:25.673591  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 10:31:25.673626  108479 httplog.go:90] GET /healthz: (1.209845ms) 0 [Go-http-client/1.1 127.0.0.1:56012]
I0919 10:31:25.675041  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/extension-apiserver-authentication-reader: (3.588895ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:25.677863  108479 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.203942ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:25.682150  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:31:25.682266  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 10:31:25.682429  108479 httplog.go:90] GET /healthz: (1.199297ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:25.693464  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.028776ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:25.693807  108479 storage_rbac.go:278] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
I0919 10:31:25.712978  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:bootstrap-signer: (1.454835ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:25.715147  108479 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.389832ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:25.734031  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.500213ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:25.734579  108479 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0919 10:31:25.752918  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:cloud-provider: (1.399925ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:25.754833  108479 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.445498ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:25.773427  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:31:25.773465  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 10:31:25.773499  108479 httplog.go:90] GET /healthz: (1.21058ms) 0 [Go-http-client/1.1 127.0.0.1:56012]
I0919 10:31:25.774017  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.50085ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:25.774390  108479 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0919 10:31:25.782601  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:31:25.782636  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 10:31:25.782682  108479 httplog.go:90] GET /healthz: (1.281713ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:25.793070  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:token-cleaner: (1.596148ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:25.795121  108479 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.493748ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:25.814006  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.480854ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:25.814352  108479 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0919 10:31:25.832723  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-controller-manager: (1.271034ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:25.834346  108479 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.195258ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:25.853358  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.906419ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:25.853666  108479 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0919 10:31:25.873327  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-scheduler: (1.811676ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:25.873617  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:31:25.873655  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 10:31:25.873683  108479 httplog.go:90] GET /healthz: (1.298463ms) 0 [Go-http-client/1.1 127.0.0.1:56012]
I0919 10:31:25.875458  108479 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.637335ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:25.882110  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:31:25.882142  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 10:31:25.882270  108479 httplog.go:90] GET /healthz: (1.004673ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:25.894045  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.539062ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:25.894418  108479 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0919 10:31:25.913234  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::extension-apiserver-authentication-reader: (1.721165ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:25.915245  108479 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.381394ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:25.934260  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.74953ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:25.934534  108479 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::extension-apiserver-authentication-reader in kube-system
I0919 10:31:25.953731  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-controller-manager: (2.190859ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:25.956094  108479 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.623445ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:25.973132  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:31:25.973165  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 10:31:25.973250  108479 httplog.go:90] GET /healthz: (846.479µs) 0 [Go-http-client/1.1 127.0.0.1:56012]
I0919 10:31:25.973686  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.081088ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:25.973865  108479 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0919 10:31:25.982487  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:31:25.982523  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 10:31:25.982560  108479 httplog.go:90] GET /healthz: (1.183558ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:25.992960  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-scheduler: (1.509462ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:25.994868  108479 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.399574ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:26.014760  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (3.10441ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:26.015012  108479 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0919 10:31:26.032767  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:bootstrap-signer: (1.229004ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:26.035332  108479 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.749154ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:26.054640  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (3.002893ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:26.054988  108479 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0919 10:31:26.073483  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:cloud-provider: (1.952721ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:26.073778  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:31:26.073800  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 10:31:26.073834  108479 httplog.go:90] GET /healthz: (1.22749ms) 0 [Go-http-client/1.1 127.0.0.1:56012]
I0919 10:31:26.075852  108479 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.798304ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:26.085831  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:31:26.085915  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 10:31:26.085955  108479 httplog.go:90] GET /healthz: (1.610314ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:26.093159  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.707553ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:26.093595  108479 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0919 10:31:26.113063  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:token-cleaner: (1.44738ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:26.115041  108479 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.288665ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:26.133466  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.986041ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:26.133759  108479 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0919 10:31:26.152919  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings/system:controller:bootstrap-signer: (1.42315ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:26.154488  108479 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.188832ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:26.173446  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings: (1.987477ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:26.173714  108479 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0919 10:31:26.174080  108479 httplog.go:90] GET /healthz: (1.703415ms) 200 [Go-http-client/1.1 127.0.0.1:56012]
I0919 10:31:26.177645  108479 httplog.go:90] POST /api/v1/namespaces/kube-system/configmaps: (2.303867ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56012]
W0919 10:31:26.178055  108479 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 10:31:26.178109  108479 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 10:31:26.178125  108479 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 10:31:26.178161  108479 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 10:31:26.178192  108479 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 10:31:26.178310  108479 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 10:31:26.178401  108479 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 10:31:26.178487  108479 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 10:31:26.178563  108479 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 10:31:26.178644  108479 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 10:31:26.178763  108479 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0919 10:31:26.180191  108479 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps/scheduler-custom-policy-config-0: (1.120346ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56012]
I0919 10:31:26.181617  108479 factory.go:304] Creating scheduler from configuration: {{ } [{PredicateOne <nil>} {PredicateTwo <nil>}] [{PriorityOne 1 <nil>} {PriorityTwo 5 <nil>}] [] 0 false}
I0919 10:31:26.181663  108479 factory.go:321] Registering predicate: PredicateOne
I0919 10:31:26.181671  108479 plugins.go:288] Predicate type PredicateOne already registered, reusing.
I0919 10:31:26.181677  108479 factory.go:321] Registering predicate: PredicateTwo
I0919 10:31:26.181680  108479 plugins.go:288] Predicate type PredicateTwo already registered, reusing.
I0919 10:31:26.181685  108479 factory.go:336] Registering priority: PriorityOne
I0919 10:31:26.181690  108479 plugins.go:399] Priority type PriorityOne already registered, reusing.
I0919 10:31:26.181701  108479 factory.go:336] Registering priority: PriorityTwo
I0919 10:31:26.181704  108479 plugins.go:399] Priority type PriorityTwo already registered, reusing.
I0919 10:31:26.181710  108479 factory.go:382] Creating scheduler with fit predicates 'map[PredicateOne:{} PredicateTwo:{}]' and priority functions 'map[PriorityOne:{} PriorityTwo:{}]'
I0919 10:31:26.182082  108479 httplog.go:90] GET /healthz: (808.633µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56012]
I0919 10:31:26.184133  108479 httplog.go:90] GET /api/v1/namespaces/default: (1.639484ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56012]
I0919 10:31:26.184359  108479 httplog.go:90] POST /api/v1/namespaces/kube-system/configmaps: (2.326392ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
W0919 10:31:26.184536  108479 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0919 10:31:26.185915  108479 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps/scheduler-custom-policy-config-1: (994.574µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:26.186032  108479 httplog.go:90] POST /api/v1/namespaces: (1.545587ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56012]
I0919 10:31:26.186385  108479 factory.go:304] Creating scheduler from configuration: {{ } [] [] [] 0 false}
I0919 10:31:26.186421  108479 factory.go:313] Using predicates from algorithm provider 'DefaultProvider'
I0919 10:31:26.186433  108479 factory.go:328] Using priorities from algorithm provider 'DefaultProvider'
I0919 10:31:26.186439  108479 factory.go:382] Creating scheduler with fit predicates 'map[CheckNodeUnschedulable:{} CheckVolumeBinding:{} GeneralPredicates:{} MatchInterPodAffinity:{} MaxAzureDiskVolumeCount:{} MaxCSIVolumeCountPred:{} MaxEBSVolumeCount:{} MaxGCEPDVolumeCount:{} NoDiskConflict:{} NoVolumeZoneConflict:{} PodToleratesNodeTaints:{}]' and priority functions 'map[BalancedResourceAllocation:{} ImageLocalityPriority:{} InterPodAffinityPriority:{} LeastRequestedPriority:{} NodeAffinityPriority:{} NodePreferAvoidPodsPriority:{} SelectorSpreadPriority:{} TaintTolerationPriority:{}]'
I0919 10:31:26.187943  108479 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.237717ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56012]
I0919 10:31:26.188358  108479 httplog.go:90] POST /api/v1/namespaces/kube-system/configmaps: (1.512046ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
W0919 10:31:26.189428  108479 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0919 10:31:26.191480  108479 httplog.go:90] POST /api/v1/namespaces/default/services: (2.89363ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56012]
I0919 10:31:26.192815  108479 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (902.874µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
I0919 10:31:26.192857  108479 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps/scheduler-custom-policy-config-2: (1.002708ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56012]
I0919 10:31:26.193501  108479 factory.go:304] Creating scheduler from configuration: {{ } [] [] [] 0 false}
I0919 10:31:26.193524  108479 factory.go:382] Creating scheduler with fit predicates 'map[]' and priority functions 'map[]'
I0919 10:31:26.194014  108479 httplog.go:90] POST /api/v1/namespaces/default/endpoints: (679.122µs) 422 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56010]
E0919 10:31:26.194307  108479 controller.go:224] unable to sync kubernetes service: Endpoints "kubernetes" is invalid: [subsets[0].addresses[0].ip: Invalid value: "<nil>": must be a valid IP address, (e.g. 10.9.8.7), subsets[0].addresses[0].ip: Invalid value: "<nil>": must be a valid IP address]
I0919 10:31:26.195010  108479 httplog.go:90] POST /api/v1/namespaces/kube-system/configmaps: (1.136501ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56012]
W0919 10:31:26.195242  108479 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0919 10:31:26.196363  108479 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps/scheduler-custom-policy-config-3: (782.237µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56012]
I0919 10:31:26.196693  108479 factory.go:304] Creating scheduler from configuration: {{ } [{PredicateOne <nil>} {PredicateTwo <nil>}] [{PriorityOne 1 <nil>} {PriorityTwo 5 <nil>}] [] 0 false}
I0919 10:31:26.196720  108479 factory.go:321] Registering predicate: PredicateOne
I0919 10:31:26.196727  108479 plugins.go:288] Predicate type PredicateOne already registered, reusing.
I0919 10:31:26.196732  108479 factory.go:321] Registering predicate: PredicateTwo
I0919 10:31:26.196735  108479 plugins.go:288] Predicate type PredicateTwo already registered, reusing.
I0919 10:31:26.196740  108479 factory.go:336] Registering priority: PriorityOne
I0919 10:31:26.196746  108479 plugins.go:399] Priority type PriorityOne already registered, reusing.
I0919 10:31:26.196763  108479 factory.go:336] Registering priority: PriorityTwo
I0919 10:31:26.196768  108479 plugins.go:399] Priority type PriorityTwo already registered, reusing.
I0919 10:31:26.196775  108479 factory.go:382] Creating scheduler with fit predicates 'map[PredicateOne:{} PredicateTwo:{}]' and priority functions 'map[PriorityOne:{} PriorityTwo:{}]'
I0919 10:31:26.198331  108479 httplog.go:90] POST /api/v1/namespaces/kube-system/configmaps: (1.256601ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56012]
W0919 10:31:26.198652  108479 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0919 10:31:26.199806  108479 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps/scheduler-custom-policy-config-4: (883.103µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56012]
I0919 10:31:26.200269  108479 factory.go:304] Creating scheduler from configuration: {{ } [] [] [] 0 false}
I0919 10:31:26.200299  108479 factory.go:313] Using predicates from algorithm provider 'DefaultProvider'
I0919 10:31:26.200336  108479 factory.go:328] Using priorities from algorithm provider 'DefaultProvider'
I0919 10:31:26.200348  108479 factory.go:382] Creating scheduler with fit predicates 'map[CheckNodeUnschedulable:{} CheckVolumeBinding:{} GeneralPredicates:{} MatchInterPodAffinity:{} MaxAzureDiskVolumeCount:{} MaxCSIVolumeCountPred:{} MaxEBSVolumeCount:{} MaxGCEPDVolumeCount:{} NoDiskConflict:{} NoVolumeZoneConflict:{} PodToleratesNodeTaints:{}]' and priority functions 'map[BalancedResourceAllocation:{} ImageLocalityPriority:{} InterPodAffinityPriority:{} LeastRequestedPriority:{} NodeAffinityPriority:{} NodePreferAvoidPodsPriority:{} SelectorSpreadPriority:{} TaintTolerationPriority:{}]'
I0919 10:31:26.375422  108479 request.go:538] Throttling request took 174.724113ms, request: POST:http://127.0.0.1:38549/api/v1/namespaces/kube-system/configmaps
I0919 10:31:26.380713  108479 httplog.go:90] POST /api/v1/namespaces/kube-system/configmaps: (4.936817ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56012]
W0919 10:31:26.381113  108479 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0919 10:31:26.575346  108479 request.go:538] Throttling request took 193.96721ms, request: GET:http://127.0.0.1:38549/api/v1/namespaces/kube-system/configmaps/scheduler-custom-policy-config-5
I0919 10:31:26.577048  108479 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps/scheduler-custom-policy-config-5: (1.393253ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56012]
I0919 10:31:26.578233  108479 factory.go:304] Creating scheduler from configuration: {{ } [] [] [] 0 false}
I0919 10:31:26.578280  108479 factory.go:382] Creating scheduler with fit predicates 'map[]' and priority functions 'map[]'
I0919 10:31:26.775297  108479 request.go:538] Throttling request took 196.763371ms, request: DELETE:http://127.0.0.1:38549/api/v1/nodes
I0919 10:31:26.777266  108479 httplog.go:90] DELETE /api/v1/nodes: (1.655826ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56012]
I0919 10:31:26.777431  108479 controller.go:182] Shutting down kubernetes service endpoint reconciler
I0919 10:31:26.778667  108479 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.062197ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56012]
--- FAIL: TestSchedulerCreationFromConfigMap (4.07s)
    scheduler_test.go:289: Expected predicates map[CheckNodeCondition:{} PredicateOne:{} PredicateTwo:{}], got map[CheckNodeUnschedulable:{} PodToleratesNodeTaints:{} PredicateOne:{} PredicateTwo:{}]
    scheduler_test.go:289: Expected predicates map[CheckNodeCondition:{} CheckNodeDiskPressure:{} CheckNodeMemoryPressure:{} CheckNodePIDPressure:{} CheckVolumeBinding:{} GeneralPredicates:{} MatchInterPodAffinity:{} MaxAzureDiskVolumeCount:{} MaxCSIVolumeCountPred:{} MaxEBSVolumeCount:{} MaxGCEPDVolumeCount:{} NoDiskConflict:{} NoVolumeZoneConflict:{} PodToleratesNodeTaints:{}], got map[CheckNodeUnschedulable:{} CheckVolumeBinding:{} GeneralPredicates:{} MatchInterPodAffinity:{} MaxAzureDiskVolumeCount:{} MaxCSIVolumeCountPred:{} MaxEBSVolumeCount:{} MaxGCEPDVolumeCount:{} NoDiskConflict:{} NoVolumeZoneConflict:{} PodToleratesNodeTaints:{}]
    scheduler_test.go:289: Expected predicates map[CheckNodeCondition:{}], got map[CheckNodeUnschedulable:{} PodToleratesNodeTaints:{}]
    scheduler_test.go:289: Expected predicates map[CheckNodeCondition:{} PredicateOne:{} PredicateTwo:{}], got map[CheckNodeUnschedulable:{} PodToleratesNodeTaints:{} PredicateOne:{} PredicateTwo:{}]
    scheduler_test.go:289: Expected predicates map[CheckNodeCondition:{} CheckNodeDiskPressure:{} CheckNodeMemoryPressure:{} CheckNodePIDPressure:{} CheckVolumeBinding:{} GeneralPredicates:{} MatchInterPodAffinity:{} MaxAzureDiskVolumeCount:{} MaxCSIVolumeCountPred:{} MaxEBSVolumeCount:{} MaxGCEPDVolumeCount:{} NoDiskConflict:{} NoVolumeZoneConflict:{} PodToleratesNodeTaints:{}], got map[CheckNodeUnschedulable:{} CheckVolumeBinding:{} GeneralPredicates:{} MatchInterPodAffinity:{} MaxAzureDiskVolumeCount:{} MaxCSIVolumeCountPred:{} MaxEBSVolumeCount:{} MaxGCEPDVolumeCount:{} NoDiskConflict:{} NoVolumeZoneConflict:{} PodToleratesNodeTaints:{}]
    scheduler_test.go:289: Expected predicates map[CheckNodeCondition:{}], got map[CheckNodeUnschedulable:{} PodToleratesNodeTaints:{}]
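The mismatches above reduce to a set difference: with TaintNodesByCondition GA'd, `CheckNodeCondition` (and the per-condition pressure predicates) no longer appear in the registered set, while `CheckNodeUnschedulable` and `PodToleratesNodeTaints` do. A minimal Go sketch (names local to this example, not part of the test code) that computes the mismatch the assertions report:

```go
package main

import (
	"fmt"
	"sort"
)

// diff returns the keys present in a but absent from b, sorted for stable output.
func diff(a, b map[string]struct{}) []string {
	var out []string
	for k := range a {
		if _, ok := b[k]; !ok {
			out = append(out, k)
		}
	}
	sort.Strings(out)
	return out
}

func main() {
	// Expected/got sets from the first scheduler_test.go:289 failure above.
	expected := map[string]struct{}{
		"CheckNodeCondition": {}, "PredicateOne": {}, "PredicateTwo": {},
	}
	got := map[string]struct{}{
		"CheckNodeUnschedulable": {}, "PodToleratesNodeTaints": {},
		"PredicateOne": {}, "PredicateTwo": {},
	}
	fmt.Println("missing:", diff(expected, got)) // missing: [CheckNodeCondition]
	fmt.Println("extra:", diff(got, expected))   // extra: [CheckNodeUnschedulable PodToleratesNodeTaints]
}
```

This suggests the test's expected-predicates fixtures still assume the pre-GA default provider and would need updating alongside the feature promotion.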

				from junit_d965d8661547eb73cabe6d94d5550ec333e4c0fa_20190919-102151.xml



k8s.io/kubernetes/test/integration/scheduler TestTaintBasedEvictions 2m0s

go test -v k8s.io/kubernetes/test/integration/scheduler -run TestTaintBasedEvictions$
=== RUN   TestTaintBasedEvictions
I0919 10:32:18.316507  108479 feature_gate.go:216] feature gates: &{map[EvenPodsSpread:false TaintBasedEvictions:true]}
--- FAIL: TestTaintBasedEvictions (120.16s)

				from junit_d965d8661547eb73cabe6d94d5550ec333e4c0fa_20190919-102151.xml



k8s.io/kubernetes/test/integration/scheduler TestTaintBasedEvictions/Taint_based_evictions_for_NodeNotReady_and_0_tolerationseconds 35s

go test -v k8s.io/kubernetes/test/integration/scheduler -run TestTaintBasedEvictions/Taint_based_evictions_for_NodeNotReady_and_0_tolerationseconds$
=== RUN   TestTaintBasedEvictions/Taint_based_evictions_for_NodeNotReady_and_0_tolerationseconds
W0919 10:33:08.376642  108479 services.go:35] No CIDR for service cluster IPs specified. Default value which was 10.0.0.0/24 is deprecated and will be removed in future releases. Please specify it using --service-cluster-ip-range on kube-apiserver.
I0919 10:33:08.376674  108479 services.go:47] Setting service IP to "10.0.0.1" (read-write).
I0919 10:33:08.376689  108479 master.go:303] Node port range unspecified. Defaulting to 30000-32767.
I0919 10:33:08.376705  108479 master.go:259] Using reconciler: 
I0919 10:33:08.379090  108479 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.379328  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:33:08.379428  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:33:08.380131  108479 store.go:1342] Monitoring podtemplates count at <storage-prefix>//podtemplates
I0919 10:33:08.380223  108479 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.380238  108479 reflector.go:153] Listing and watching *core.PodTemplate from storage/cacher.go:/podtemplates
I0919 10:33:08.380494  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:33:08.380510  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:33:08.381462  108479 watch_cache.go:405] Replace watchCache (rev: 59365) 
I0919 10:33:08.381666  108479 store.go:1342] Monitoring events count at <storage-prefix>//events
I0919 10:33:08.381702  108479 reflector.go:153] Listing and watching *core.Event from storage/cacher.go:/events
I0919 10:33:08.381720  108479 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.381842  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:33:08.381864  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:33:08.382921  108479 store.go:1342] Monitoring limitranges count at <storage-prefix>//limitranges
I0919 10:33:08.382968  108479 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.382992  108479 reflector.go:153] Listing and watching *core.LimitRange from storage/cacher.go:/limitranges
I0919 10:33:08.383104  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:33:08.383131  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:33:08.383233  108479 watch_cache.go:405] Replace watchCache (rev: 59366) 
I0919 10:33:08.384013  108479 store.go:1342] Monitoring resourcequotas count at <storage-prefix>//resourcequotas
I0919 10:33:08.384048  108479 reflector.go:153] Listing and watching *core.ResourceQuota from storage/cacher.go:/resourcequotas
I0919 10:33:08.384222  108479 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.384369  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:33:08.384390  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:33:08.385089  108479 watch_cache.go:405] Replace watchCache (rev: 59366) 
I0919 10:33:08.385500  108479 store.go:1342] Monitoring secrets count at <storage-prefix>//secrets
I0919 10:33:08.385580  108479 reflector.go:153] Listing and watching *core.Secret from storage/cacher.go:/secrets
I0919 10:33:08.385907  108479 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.386242  108479 watch_cache.go:405] Replace watchCache (rev: 59366) 
I0919 10:33:08.386541  108479 watch_cache.go:405] Replace watchCache (rev: 59367) 
I0919 10:33:08.386564  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:33:08.386586  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:33:08.387471  108479 store.go:1342] Monitoring persistentvolumes count at <storage-prefix>//persistentvolumes
I0919 10:33:08.387516  108479 reflector.go:153] Listing and watching *core.PersistentVolume from storage/cacher.go:/persistentvolumes
I0919 10:33:08.387674  108479 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.388100  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:33:08.388126  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:33:08.388726  108479 store.go:1342] Monitoring persistentvolumeclaims count at <storage-prefix>//persistentvolumeclaims
I0919 10:33:08.388911  108479 reflector.go:153] Listing and watching *core.PersistentVolumeClaim from storage/cacher.go:/persistentvolumeclaims
I0919 10:33:08.388916  108479 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.389391  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:33:08.389420  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:33:08.389994  108479 store.go:1342] Monitoring configmaps count at <storage-prefix>//configmaps
I0919 10:33:08.390162  108479 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.390286  108479 reflector.go:153] Listing and watching *core.ConfigMap from storage/cacher.go:/configmaps
I0919 10:33:08.390362  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:33:08.390383  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:33:08.391010  108479 store.go:1342] Monitoring namespaces count at <storage-prefix>//namespaces
I0919 10:33:08.391125  108479 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.391229  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:33:08.391226  108479 reflector.go:153] Listing and watching *core.Namespace from storage/cacher.go:/namespaces
I0919 10:33:08.391248  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:33:08.391972  108479 store.go:1342] Monitoring endpoints count at <storage-prefix>//services/endpoints
I0919 10:33:08.392039  108479 reflector.go:153] Listing and watching *core.Endpoints from storage/cacher.go:/services/endpoints
I0919 10:33:08.392189  108479 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.392290  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:33:08.392305  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:33:08.392905  108479 store.go:1342] Monitoring nodes count at <storage-prefix>//minions
I0919 10:33:08.392924  108479 reflector.go:153] Listing and watching *core.Node from storage/cacher.go:/minions
I0919 10:33:08.393099  108479 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.393246  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:33:08.393263  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:33:08.393752  108479 store.go:1342] Monitoring pods count at <storage-prefix>//pods
I0919 10:33:08.393879  108479 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.394082  108479 reflector.go:153] Listing and watching *core.Pod from storage/cacher.go:/pods
I0919 10:33:08.394398  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:33:08.394429  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:33:08.394464  108479 watch_cache.go:405] Replace watchCache (rev: 59367) 
I0919 10:33:08.394585  108479 watch_cache.go:405] Replace watchCache (rev: 59367) 
I0919 10:33:08.394925  108479 watch_cache.go:405] Replace watchCache (rev: 59367) 
I0919 10:33:08.395056  108479 watch_cache.go:405] Replace watchCache (rev: 59367) 
I0919 10:33:08.395109  108479 watch_cache.go:405] Replace watchCache (rev: 59367) 
I0919 10:33:08.395120  108479 watch_cache.go:405] Replace watchCache (rev: 59367) 
I0919 10:33:08.395111  108479 watch_cache.go:405] Replace watchCache (rev: 59367) 
I0919 10:33:08.395651  108479 store.go:1342] Monitoring serviceaccounts count at <storage-prefix>//serviceaccounts
I0919 10:33:08.395864  108479 reflector.go:153] Listing and watching *core.ServiceAccount from storage/cacher.go:/serviceaccounts
I0919 10:33:08.395861  108479 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.395983  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:33:08.395998  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:33:08.397323  108479 watch_cache.go:405] Replace watchCache (rev: 59367) 
I0919 10:33:08.398631  108479 store.go:1342] Monitoring services count at <storage-prefix>//services/specs
I0919 10:33:08.398673  108479 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.398712  108479 reflector.go:153] Listing and watching *core.Service from storage/cacher.go:/services/specs
I0919 10:33:08.398784  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:33:08.398796  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:33:08.399993  108479 watch_cache.go:405] Replace watchCache (rev: 59367) 
I0919 10:33:08.404585  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:33:08.404635  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:33:08.405334  108479 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.405476  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:33:08.405546  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:33:08.406127  108479 store.go:1342] Monitoring replicationcontrollers count at <storage-prefix>//controllers
I0919 10:33:08.406257  108479 reflector.go:153] Listing and watching *core.ReplicationController from storage/cacher.go:/controllers
I0919 10:33:08.406259  108479 rest.go:115] the default service ipfamily for this cluster is: IPv4
I0919 10:33:08.406766  108479 storage_factory.go:285] storing bindings in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.406961  108479 storage_factory.go:285] storing componentstatuses in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.407588  108479 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.408537  108479 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.409122  108479 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.409269  108479 watch_cache.go:405] Replace watchCache (rev: 59369) 
I0919 10:33:08.409698  108479 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.410023  108479 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.410130  108479 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.410614  108479 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.411023  108479 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.411453  108479 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.411668  108479 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.412374  108479 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.412938  108479 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.413478  108479 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.413717  108479 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.414393  108479 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.414607  108479 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.414750  108479 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.414889  108479 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.415019  108479 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.415144  108479 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.415293  108479 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.416302  108479 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.416580  108479 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.417329  108479 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.418804  108479 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.419037  108479 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.419350  108479 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.420149  108479 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.420442  108479 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.421236  108479 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.421921  108479 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.422484  108479 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.423296  108479 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.423517  108479 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.423615  108479 master.go:450] Skipping disabled API group "auditregistration.k8s.io".
I0919 10:33:08.423639  108479 master.go:461] Enabling API group "authentication.k8s.io".
I0919 10:33:08.423659  108479 master.go:461] Enabling API group "authorization.k8s.io".
I0919 10:33:08.423801  108479 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.423962  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:33:08.423996  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:33:08.424926  108479 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0919 10:33:08.425446  108479 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.425718  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:33:08.425834  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:33:08.425477  108479 reflector.go:153] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0919 10:33:08.426672  108479 watch_cache.go:405] Replace watchCache (rev: 59375) 
I0919 10:33:08.426941  108479 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0919 10:33:08.427005  108479 reflector.go:153] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0919 10:33:08.427601  108479 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.427827  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:33:08.427963  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:33:08.427844  108479 watch_cache.go:405] Replace watchCache (rev: 59376) 
I0919 10:33:08.429607  108479 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0919 10:33:08.429635  108479 reflector.go:153] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0919 10:33:08.429831  108479 master.go:461] Enabling API group "autoscaling".
I0919 10:33:08.430025  108479 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.430267  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:33:08.430354  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:33:08.431733  108479 store.go:1342] Monitoring jobs.batch count at <storage-prefix>//jobs
I0919 10:33:08.431870  108479 reflector.go:153] Listing and watching *batch.Job from storage/cacher.go:/jobs
I0919 10:33:08.431918  108479 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.432092  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:33:08.432120  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:33:08.432332  108479 watch_cache.go:405] Replace watchCache (rev: 59379) 
I0919 10:33:08.433556  108479 store.go:1342] Monitoring cronjobs.batch count at <storage-prefix>//cronjobs
I0919 10:33:08.433584  108479 master.go:461] Enabling API group "batch".
I0919 10:33:08.433612  108479 reflector.go:153] Listing and watching *batch.CronJob from storage/cacher.go:/cronjobs
I0919 10:33:08.433766  108479 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.433921  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:33:08.433939  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:33:08.434279  108479 watch_cache.go:405] Replace watchCache (rev: 59379) 
I0919 10:33:08.434527  108479 watch_cache.go:405] Replace watchCache (rev: 59379) 
I0919 10:33:08.434625  108479 store.go:1342] Monitoring certificatesigningrequests.certificates.k8s.io count at <storage-prefix>//certificatesigningrequests
I0919 10:33:08.434662  108479 master.go:461] Enabling API group "certificates.k8s.io".
I0919 10:33:08.434797  108479 reflector.go:153] Listing and watching *certificates.CertificateSigningRequest from storage/cacher.go:/certificatesigningrequests
I0919 10:33:08.434830  108479 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.435040  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:33:08.435069  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:33:08.435836  108479 watch_cache.go:405] Replace watchCache (rev: 59379) 
I0919 10:33:08.436232  108479 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0919 10:33:08.436265  108479 reflector.go:153] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0919 10:33:08.436648  108479 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.436897  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:33:08.437017  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:33:08.436943  108479 watch_cache.go:405] Replace watchCache (rev: 59379) 
I0919 10:33:08.439561  108479 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0919 10:33:08.439582  108479 master.go:461] Enabling API group "coordination.k8s.io".
I0919 10:33:08.439599  108479 master.go:450] Skipping disabled API group "discovery.k8s.io".
I0919 10:33:08.439742  108479 reflector.go:153] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0919 10:33:08.440539  108479 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.440793  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:33:08.441314  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:33:08.440882  108479 watch_cache.go:405] Replace watchCache (rev: 59380) 
I0919 10:33:08.442775  108479 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0919 10:33:08.442800  108479 master.go:461] Enabling API group "extensions".
I0919 10:33:08.442861  108479 reflector.go:153] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0919 10:33:08.442958  108479 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.443673  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:33:08.443758  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:33:08.444036  108479 watch_cache.go:405] Replace watchCache (rev: 59381) 
I0919 10:33:08.444751  108479 store.go:1342] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I0919 10:33:08.444781  108479 reflector.go:153] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I0919 10:33:08.445223  108479 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.445449  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:33:08.445577  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:33:08.446829  108479 watch_cache.go:405] Replace watchCache (rev: 59381) 
I0919 10:33:08.448070  108479 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0919 10:33:08.448097  108479 master.go:461] Enabling API group "networking.k8s.io".
I0919 10:33:08.448102  108479 reflector.go:153] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0919 10:33:08.448129  108479 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.448316  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:33:08.448337  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:33:08.449257  108479 watch_cache.go:405] Replace watchCache (rev: 59382) 
I0919 10:33:08.449394  108479 store.go:1342] Monitoring runtimeclasses.node.k8s.io count at <storage-prefix>//runtimeclasses
I0919 10:33:08.449414  108479 master.go:461] Enabling API group "node.k8s.io".
I0919 10:33:08.449552  108479 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.449582  108479 reflector.go:153] Listing and watching *node.RuntimeClass from storage/cacher.go:/runtimeclasses
I0919 10:33:08.449669  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:33:08.449682  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:33:08.450811  108479 watch_cache.go:405] Replace watchCache (rev: 59383) 
I0919 10:33:08.450855  108479 reflector.go:153] Listing and watching *policy.PodDisruptionBudget from storage/cacher.go:/poddisruptionbudgets
I0919 10:33:08.450839  108479 store.go:1342] Monitoring poddisruptionbudgets.policy count at <storage-prefix>//poddisruptionbudgets
I0919 10:33:08.451693  108479 watch_cache.go:405] Replace watchCache (rev: 59383) 
I0919 10:33:08.451824  108479 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.451934  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:33:08.451944  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:33:08.452922  108479 store.go:1342] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicy
I0919 10:33:08.452948  108479 master.go:461] Enabling API group "policy".
I0919 10:33:08.452979  108479 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.453048  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:33:08.453046  108479 reflector.go:153] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicy
I0919 10:33:08.453060  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:33:08.453762  108479 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0919 10:33:08.453804  108479 reflector.go:153] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0919 10:33:08.453969  108479 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.454089  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:33:08.454120  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:33:08.454467  108479 watch_cache.go:405] Replace watchCache (rev: 59383) 
I0919 10:33:08.454521  108479 watch_cache.go:405] Replace watchCache (rev: 59383) 
I0919 10:33:08.454830  108479 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0919 10:33:08.454862  108479 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.454879  108479 reflector.go:153] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0919 10:33:08.454964  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:33:08.454988  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:33:08.455756  108479 watch_cache.go:405] Replace watchCache (rev: 59383) 
I0919 10:33:08.455995  108479 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0919 10:33:08.456049  108479 reflector.go:153] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0919 10:33:08.456190  108479 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.456313  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:33:08.456333  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:33:08.456839  108479 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0919 10:33:08.456885  108479 reflector.go:153] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0919 10:33:08.457011  108479 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.457090  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:33:08.457102  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:33:08.457122  108479 watch_cache.go:405] Replace watchCache (rev: 59383) 
I0919 10:33:08.457971  108479 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0919 10:33:08.458224  108479 reflector.go:153] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0919 10:33:08.458128  108479 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.458337  108479 watch_cache.go:405] Replace watchCache (rev: 59383) 
I0919 10:33:08.458376  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:33:08.458394  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:33:08.459395  108479 watch_cache.go:405] Replace watchCache (rev: 59383) 
I0919 10:33:08.459520  108479 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0919 10:33:08.459543  108479 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.459599  108479 reflector.go:153] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0919 10:33:08.459628  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:33:08.459638  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:33:08.460309  108479 watch_cache.go:405] Replace watchCache (rev: 59383) 
I0919 10:33:08.460723  108479 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0919 10:33:08.460839  108479 reflector.go:153] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0919 10:33:08.460865  108479 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.460969  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:33:08.460988  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:33:08.461841  108479 watch_cache.go:405] Replace watchCache (rev: 59383) 
I0919 10:33:08.461993  108479 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0919 10:33:08.462027  108479 master.go:461] Enabling API group "rbac.authorization.k8s.io".
I0919 10:33:08.462153  108479 reflector.go:153] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0919 10:33:08.462960  108479 watch_cache.go:405] Replace watchCache (rev: 59383) 
I0919 10:33:08.464278  108479 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.464381  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:33:08.464514  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:33:08.465402  108479 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0919 10:33:08.465466  108479 reflector.go:153] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0919 10:33:08.465581  108479 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.465721  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:33:08.465743  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:33:08.466334  108479 watch_cache.go:405] Replace watchCache (rev: 59384) 
I0919 10:33:08.466391  108479 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0919 10:33:08.466406  108479 master.go:461] Enabling API group "scheduling.k8s.io".
I0919 10:33:08.466435  108479 reflector.go:153] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0919 10:33:08.466533  108479 master.go:450] Skipping disabled API group "settings.k8s.io".
I0919 10:33:08.466718  108479 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.466818  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:33:08.466837  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:33:08.467721  108479 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0919 10:33:08.467746  108479 reflector.go:153] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0919 10:33:08.468021  108479 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.468968  108479 watch_cache.go:405] Replace watchCache (rev: 59385) 
I0919 10:33:08.469099  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:33:08.469229  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:33:08.469139  108479 watch_cache.go:405] Replace watchCache (rev: 59385) 
I0919 10:33:08.470445  108479 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0919 10:33:08.470472  108479 reflector.go:153] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0919 10:33:08.470485  108479 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.470582  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:33:08.470597  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:33:08.471599  108479 watch_cache.go:405] Replace watchCache (rev: 59386) 
I0919 10:33:08.472266  108479 store.go:1342] Monitoring csinodes.storage.k8s.io count at <storage-prefix>//csinodes
I0919 10:33:08.472314  108479 reflector.go:153] Listing and watching *storage.CSINode from storage/cacher.go:/csinodes
I0919 10:33:08.472561  108479 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.472982  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:33:08.473108  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:33:08.473844  108479 store.go:1342] Monitoring csidrivers.storage.k8s.io count at <storage-prefix>//csidrivers
I0919 10:33:08.473925  108479 reflector.go:153] Listing and watching *storage.CSIDriver from storage/cacher.go:/csidrivers
I0919 10:33:08.474350  108479 watch_cache.go:405] Replace watchCache (rev: 59386) 
I0919 10:33:08.474535  108479 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.474658  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:33:08.474779  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:33:08.476091  108479 watch_cache.go:405] Replace watchCache (rev: 59387) 
I0919 10:33:08.476126  108479 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0919 10:33:08.476151  108479 reflector.go:153] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0919 10:33:08.476308  108479 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.476712  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:33:08.476736  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:33:08.477069  108479 watch_cache.go:405] Replace watchCache (rev: 59387) 
I0919 10:33:08.477511  108479 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0919 10:33:08.477540  108479 reflector.go:153] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0919 10:33:08.477745  108479 master.go:461] Enabling API group "storage.k8s.io".
I0919 10:33:08.478019  108479 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.478250  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:33:08.478357  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:33:08.478360  108479 watch_cache.go:405] Replace watchCache (rev: 59387) 
I0919 10:33:08.479687  108479 store.go:1342] Monitoring deployments.apps count at <storage-prefix>//deployments
I0919 10:33:08.479727  108479 reflector.go:153] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0919 10:33:08.479908  108479 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.480009  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:33:08.480031  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:33:08.481961  108479 watch_cache.go:405] Replace watchCache (rev: 59388) 
I0919 10:33:08.483307  108479 store.go:1342] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0919 10:33:08.483360  108479 reflector.go:153] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0919 10:33:08.483487  108479 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.484027  108479 watch_cache.go:405] Replace watchCache (rev: 59388) 
I0919 10:33:08.484717  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:33:08.484736  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:33:08.485365  108479 store.go:1342] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0919 10:33:08.485444  108479 reflector.go:153] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0919 10:33:08.485597  108479 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.485725  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:33:08.485750  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:33:08.486341  108479 store.go:1342] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0919 10:33:08.486518  108479 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.486554  108479 watch_cache.go:405] Replace watchCache (rev: 59389) 
I0919 10:33:08.486628  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:33:08.486643  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:33:08.486749  108479 reflector.go:153] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0919 10:33:08.487935  108479 watch_cache.go:405] Replace watchCache (rev: 59389) 
I0919 10:33:08.488103  108479 store.go:1342] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0919 10:33:08.488126  108479 master.go:461] Enabling API group "apps".
I0919 10:33:08.488168  108479 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.488318  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:33:08.488358  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:33:08.488196  108479 reflector.go:153] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0919 10:33:08.489397  108479 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0919 10:33:08.489552  108479 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.489429  108479 watch_cache.go:405] Replace watchCache (rev: 59390) 
I0919 10:33:08.489455  108479 reflector.go:153] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0919 10:33:08.490157  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:33:08.490262  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:33:08.490408  108479 watch_cache.go:405] Replace watchCache (rev: 59390) 
I0919 10:33:08.491196  108479 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0919 10:33:08.491243  108479 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.491281  108479 reflector.go:153] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0919 10:33:08.491342  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:33:08.491365  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:33:08.491944  108479 watch_cache.go:405] Replace watchCache (rev: 59390) 
I0919 10:33:08.492206  108479 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0919 10:33:08.492329  108479 reflector.go:153] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0919 10:33:08.492636  108479 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.492935  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:33:08.493082  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:33:08.493552  108479 watch_cache.go:405] Replace watchCache (rev: 59390) 
I0919 10:33:08.493911  108479 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0919 10:33:08.493932  108479 master.go:461] Enabling API group "admissionregistration.k8s.io".
I0919 10:33:08.493954  108479 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.493979  108479 reflector.go:153] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0919 10:33:08.494247  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:33:08.494275  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:33:08.495004  108479 watch_cache.go:405] Replace watchCache (rev: 59390) 
I0919 10:33:08.495020  108479 store.go:1342] Monitoring events count at <storage-prefix>//events
I0919 10:33:08.495046  108479 reflector.go:153] Listing and watching *core.Event from storage/cacher.go:/events
I0919 10:33:08.495060  108479 master.go:461] Enabling API group "events.k8s.io".
I0919 10:33:08.495770  108479 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.496062  108479 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.496518  108479 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.496675  108479 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.497419  108479 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.497650  108479 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.497954  108479 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.498136  108479 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.498311  108479 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.498480  108479 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.498768  108479 watch_cache.go:405] Replace watchCache (rev: 59391) 
I0919 10:33:08.499633  108479 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.499916  108479 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.501061  108479 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.501507  108479 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.502539  108479 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.502904  108479 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.504294  108479 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.504634  108479 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.505551  108479 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.505940  108479 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0919 10:33:08.506012  108479 genericapiserver.go:404] Skipping API batch/v2alpha1 because it has no resources.
I0919 10:33:08.506961  108479 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.507403  108479 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.507700  108479 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.508871  108479 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.509673  108479 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.510866  108479 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.511154  108479 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.512146  108479 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.513394  108479 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.513708  108479 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.514506  108479 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0919 10:33:08.514586  108479 genericapiserver.go:404] Skipping API node.k8s.io/v1alpha1 because it has no resources.
I0919 10:33:08.515427  108479 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.515861  108479 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.516734  108479 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.517522  108479 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.517899  108479 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.518540  108479 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.519523  108479 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.520202  108479 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.520787  108479 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.521901  108479 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.522719  108479 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0919 10:33:08.522782  108479 genericapiserver.go:404] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
I0919 10:33:08.523491  108479 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.524151  108479 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0919 10:33:08.524246  108479 genericapiserver.go:404] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
I0919 10:33:08.525133  108479 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.525729  108479 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.526012  108479 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.526580  108479 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.527013  108479 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.527749  108479 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.527819  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:08.528429  108479 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0919 10:33:08.528497  108479 genericapiserver.go:404] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
I0919 10:33:08.529432  108479 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.529578  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:08.529742  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:08.529968  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:08.529991  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:08.530068  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:08.530355  108479 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.530586  108479 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.530615  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:08.531434  108479 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.531721  108479 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.532025  108479 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.533051  108479 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.533386  108479 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.533680  108479 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.534623  108479 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.534943  108479 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.535257  108479 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0919 10:33:08.535327  108479 genericapiserver.go:404] Skipping API apps/v1beta2 because it has no resources.
W0919 10:33:08.535339  108479 genericapiserver.go:404] Skipping API apps/v1beta1 because it has no resources.
I0919 10:33:08.536392  108479 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.537157  108479 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.538123  108479 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.539024  108479 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0919 10:33:08.539953  108479 storage_factory.go:285] storing events.events.k8s.io in events.k8s.io/v1beta1, reading as events.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"2827d84e-6ac5-4d5a-a27a-d1a42961103f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
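The `storagebackend.Config` dumps above print Go `time.Duration` fields (`CompactionInterval`, `CountMetricPollPeriod`) as raw nanosecond integers, which makes them hard to read at a glance. A small illustrative helper (not part of the test harness) for converting them:

```python
def ns_to_human(ns: int) -> str:
    """Render a nanosecond duration as seconds or whole minutes."""
    seconds = ns / 1e9
    if seconds >= 60:
        return f"{seconds / 60:g}m"
    return f"{seconds:g}s"

# Values as they appear in the config dumps above:
print(ns_to_human(300000000000))  # CompactionInterval -> 5m
print(ns_to_human(60000000000))   # CountMetricPollPeriod -> 1m
```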
I0919 10:33:08.543948  108479 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 10:33:08.543984  108479 healthz.go:177] healthz check poststarthook/bootstrap-controller failed: not finished
I0919 10:33:08.543997  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:33:08.544008  108479 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 10:33:08.544018  108479 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 10:33:08.544026  108479 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[-]poststarthook/bootstrap-controller failed: reason withheld
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 10:33:08.544058  108479 httplog.go:90] GET /healthz: (294.441µs) 0 [Go-http-client/1.1 127.0.0.1:52846]
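The verbose `/healthz` bodies interleaved above use a simple line format: `[+]check ok` for passing checks and `[-]check failed: reason withheld` for failing ones. A hedged sketch of a parser for that format (function and regex names here are illustrative, not taken from kube code):

```python
import re

# One line per health check: sign, check name, then "ok" or "failed: ...".
CHECK_RE = re.compile(r"^\[([+-])\]([^ ]+) (ok|failed.*)$")

def parse_healthz(body: str) -> dict:
    """Map each named check to True (ok) or False (failed)."""
    results = {}
    for line in body.splitlines():
        m = CHECK_RE.match(line.strip())
        if m:
            results[m.group(2)] = (m.group(1) == "+")
    return results

body = """[+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[-]poststarthook/ca-registration failed: reason withheld"""
checks = parse_healthz(body)
print(checks["etcd"])  # False
```

As the log shows, `etcd` keeps failing with "etcd client connection not yet established" while the apiserver starts up, which is why every probe in this window returns `healthz check failed`.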
I0919 10:33:08.545369  108479 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.428029ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52848]
I0919 10:33:08.549082  108479 httplog.go:90] GET /api/v1/services: (2.612545ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52848]
I0919 10:33:08.556478  108479 httplog.go:90] GET /api/v1/services: (1.057615ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52848]
I0919 10:33:08.558608  108479 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 10:33:08.558639  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:33:08.558651  108479 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 10:33:08.558658  108479 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 10:33:08.558663  108479 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 10:33:08.558693  108479 httplog.go:90] GET /healthz: (237.232µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52848]
I0919 10:33:08.560308  108479 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.525887ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52846]
I0919 10:33:08.560429  108479 httplog.go:90] GET /api/v1/services: (1.008243ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52848]
I0919 10:33:08.561130  108479 httplog.go:90] GET /api/v1/services: (980.344µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52850]
I0919 10:33:08.563929  108479 httplog.go:90] POST /api/v1/namespaces: (1.372344ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52846]
I0919 10:33:08.565149  108479 httplog.go:90] GET /api/v1/namespaces/kube-public: (845.926µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52846]
I0919 10:33:08.566959  108479 httplog.go:90] POST /api/v1/namespaces: (1.297967ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52846]
I0919 10:33:08.567904  108479 httplog.go:90] GET /api/v1/namespaces/kube-node-lease: (642.045µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52846]
I0919 10:33:08.570654  108479 httplog.go:90] POST /api/v1/namespaces: (1.928503ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52846]
I0919 10:33:08.644963  108479 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 10:33:08.645032  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:33:08.645042  108479 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 10:33:08.645048  108479 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 10:33:08.645054  108479 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 10:33:08.645096  108479 httplog.go:90] GET /healthz: (272.922µs) 0 [Go-http-client/1.1 127.0.0.1:52846]
I0919 10:33:08.659480  108479 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 10:33:08.659521  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:33:08.659533  108479 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 10:33:08.659542  108479 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 10:33:08.659551  108479 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 10:33:08.659578  108479 httplog.go:90] GET /healthz: (284.042µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52846]
I0919 10:33:08.744877  108479 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 10:33:08.744910  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:33:08.744920  108479 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 10:33:08.744926  108479 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 10:33:08.744931  108479 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 10:33:08.744963  108479 httplog.go:90] GET /healthz: (203.202µs) 0 [Go-http-client/1.1 127.0.0.1:52846]
I0919 10:33:08.759418  108479 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 10:33:08.759451  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:33:08.759460  108479 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 10:33:08.759466  108479 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 10:33:08.759472  108479 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 10:33:08.759532  108479 httplog.go:90] GET /healthz: (241.648µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52846]
I0919 10:33:08.844975  108479 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 10:33:08.845011  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:33:08.845029  108479 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 10:33:08.845037  108479 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 10:33:08.845043  108479 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 10:33:08.845076  108479 httplog.go:90] GET /healthz: (227.166µs) 0 [Go-http-client/1.1 127.0.0.1:52846]
I0919 10:33:08.859608  108479 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 10:33:08.859647  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:33:08.859659  108479 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 10:33:08.859668  108479 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 10:33:08.859679  108479 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 10:33:08.859717  108479 httplog.go:90] GET /healthz: (266.537µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52846]
I0919 10:33:08.931661  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:08.931675  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:08.933435  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:08.933549  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:08.933593  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:08.933958  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:08.944858  108479 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 10:33:08.944992  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:33:08.945021  108479 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 10:33:08.945043  108479 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 10:33:08.945160  108479 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 10:33:08.945304  108479 httplog.go:90] GET /healthz: (586.637µs) 0 [Go-http-client/1.1 127.0.0.1:52846]
I0919 10:33:08.945806  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:08.945818  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:08.945834  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:08.945848  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:08.945987  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:08.946053  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:08.959438  108479 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 10:33:08.959473  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:33:08.959486  108479 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 10:33:08.959493  108479 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 10:33:08.959499  108479 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 10:33:08.959523  108479 httplog.go:90] GET /healthz: (214.139µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52846]
I0919 10:33:09.044911  108479 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 10:33:09.044945  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:33:09.044954  108479 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 10:33:09.044960  108479 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 10:33:09.044965  108479 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 10:33:09.044989  108479 httplog.go:90] GET /healthz: (204.626µs) 0 [Go-http-client/1.1 127.0.0.1:52846]
I0919 10:33:09.059578  108479 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 10:33:09.059608  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:33:09.059616  108479 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 10:33:09.059623  108479 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 10:33:09.059630  108479 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 10:33:09.059665  108479 httplog.go:90] GET /healthz: (233.586µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52846]
I0919 10:33:09.135619  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:09.145359  108479 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 10:33:09.145394  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:33:09.145403  108479 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 10:33:09.145410  108479 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 10:33:09.145430  108479 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 10:33:09.145471  108479 httplog.go:90] GET /healthz: (315.15µs) 0 [Go-http-client/1.1 127.0.0.1:52846]
I0919 10:33:09.148792  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:09.159434  108479 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 10:33:09.159466  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:33:09.159475  108479 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 10:33:09.159481  108479 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 10:33:09.159486  108479 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 10:33:09.159538  108479 httplog.go:90] GET /healthz: (252.71µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52846]
I0919 10:33:09.245045  108479 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 10:33:09.245081  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:33:09.245094  108479 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 10:33:09.245101  108479 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 10:33:09.245108  108479 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 10:33:09.245149  108479 httplog.go:90] GET /healthz: (254.466µs) 0 [Go-http-client/1.1 127.0.0.1:52846]
I0919 10:33:09.259477  108479 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 10:33:09.259515  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:33:09.259524  108479 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 10:33:09.259546  108479 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 10:33:09.259552  108479 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 10:33:09.259590  108479 httplog.go:90] GET /healthz: (241.39µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52846]
I0919 10:33:09.344843  108479 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 10:33:09.344889  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:33:09.344902  108479 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 10:33:09.344911  108479 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 10:33:09.344918  108479 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 10:33:09.344947  108479 httplog.go:90] GET /healthz: (223.905µs) 0 [Go-http-client/1.1 127.0.0.1:52846]
I0919 10:33:09.359551  108479 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0919 10:33:09.359586  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:33:09.359638  108479 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 10:33:09.359649  108479 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 10:33:09.359656  108479 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 10:33:09.359697  108479 httplog.go:90] GET /healthz: (261.5µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52846]
I0919 10:33:09.376712  108479 client.go:361] parsed scheme: "endpoint"
I0919 10:33:09.376798  108479 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0919 10:33:09.446088  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:33:09.446357  108479 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 10:33:09.446467  108479 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 10:33:09.446553  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 10:33:09.446782  108479 httplog.go:90] GET /healthz: (1.960286ms) 0 [Go-http-client/1.1 127.0.0.1:52846]
I0919 10:33:09.460153  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:33:09.460193  108479 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 10:33:09.460200  108479 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 10:33:09.460206  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 10:33:09.460250  108479 httplog.go:90] GET /healthz: (913.687µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52846]
I0919 10:33:09.527944  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:09.529856  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:09.529958  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:09.530081  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:09.530111  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:09.530254  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:09.530857  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:09.545524  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.483522ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52846]
I0919 10:33:09.546071  108479 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.050581ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52848]
I0919 10:33:09.546446  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:33:09.546808  108479 httplog.go:90] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-node-critical: (1.105097ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52862]
I0919 10:33:09.547317  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.014573ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52846]
I0919 10:33:09.547539  108479 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: (1.027215ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52848]
I0919 10:33:09.547783  108479 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0919 10:33:09.547884  108479 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0919 10:33:09.547954  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0919 10:33:09.549418  108479 httplog.go:90] GET /healthz: (3.823061ms) 0 [Go-http-client/1.1 127.0.0.1:52864]
I0919 10:33:09.548695  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (977.937µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52862]
I0919 10:33:09.549494  108479 httplog.go:90] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (1.710528ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52846]
I0919 10:33:09.549889  108479 storage_scheduling.go:139] created PriorityClass system-node-critical with value 2000001000
I0919 10:33:09.551000  108479 httplog.go:90] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-cluster-critical: (837.708µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:09.551002  108479 httplog.go:90] POST /api/v1/namespaces/kube-system/configmaps: (3.063804ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52848]
I0919 10:33:09.551563  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (1.663532ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52862]
I0919 10:33:09.553409  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (1.432817ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52862]
I0919 10:33:09.553434  108479 httplog.go:90] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (1.931761ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52848]
I0919 10:33:09.553693  108479 storage_scheduling.go:139] created PriorityClass system-cluster-critical with value 2000000000
I0919 10:33:09.553713  108479 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
I0919 10:33:09.554716  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (834.405µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52862]
I0919 10:33:09.555944  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (898.242µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52862]
I0919 10:33:09.557071  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (739.177µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52862]
I0919 10:33:09.558494  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (736.29µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52862]
I0919 10:33:09.559806  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-admin: (915.224µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52862]
I0919 10:33:09.559922  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:33:09.559941  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 10:33:09.559964  108479 httplog.go:90] GET /healthz: (813.478µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:09.561824  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.598897ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:09.561988  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/cluster-admin
I0919 10:33:09.562891  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:discovery: (750.283µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:09.565356  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.485548ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:09.565583  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:discovery
I0919 10:33:09.566550  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:basic-user: (715.445µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:09.568670  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.737819ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:09.568872  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:basic-user
I0919 10:33:09.570040  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:public-info-viewer: (902.951µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:09.572641  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.702531ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:09.572934  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:public-info-viewer
I0919 10:33:09.574101  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (838.606µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:09.576120  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.572119ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:09.576418  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/admin
I0919 10:33:09.577511  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (914.856µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:09.579399  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.504953ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:09.579563  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/edit
I0919 10:33:09.581529  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (1.755649ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:09.583382  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.446598ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:09.583596  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/view
I0919 10:33:09.584610  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (813.089µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:09.586890  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.727263ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:09.587354  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin
I0919 10:33:09.588310  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (761.249µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:09.590415  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.681402ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:09.590700  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit
I0919 10:33:09.591843  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (943.073µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:09.593871  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.587211ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:09.594222  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view
I0919 10:33:09.595693  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:heapster: (1.288408ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:09.598077  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.918141ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:09.598301  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:heapster
I0919 10:33:09.600210  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node: (1.673938ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:09.603089  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.41633ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:09.603577  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node
I0919 10:33:09.604818  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-problem-detector: (1.047275ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:09.607207  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.794936ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:09.608017  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector
I0919 10:33:09.609674  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kubelet-api-admin: (907.933µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:09.611728  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.62934ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:09.611969  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kubelet-api-admin
I0919 10:33:09.613003  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-bootstrapper: (774.051µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:09.615675  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.156918ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:09.615912  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-bootstrapper
I0919 10:33:09.616943  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:auth-delegator: (833.091µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:09.618838  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.419574ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:09.619066  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:auth-delegator
I0919 10:33:09.620116  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-aggregator: (821.765µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:09.621930  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.49534ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:09.622148  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator
I0919 10:33:09.623215  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-controller-manager: (753.576µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:09.625054  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.442868ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:09.625279  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager
I0919 10:33:09.626383  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-dns: (848.662µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:09.628305  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.528117ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:09.628458  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-dns
I0919 10:33:09.629310  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:persistent-volume-provisioner: (683.925µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:09.631352  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.734542ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:09.631683  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner
I0919 10:33:09.632752  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-attacher: (872.094µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:09.634705  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.701697ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:09.634973  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:csi-external-attacher
I0919 10:33:09.636221  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:nodeclient: (830.552µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:09.638308  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.520179ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:09.638711  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:nodeclient
I0919 10:33:09.639820  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient: (821.105µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:09.641817  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.492904ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:09.642053  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
I0919 10:33:09.643304  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:volume-scheduler: (954.853µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:09.645200  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.438636ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:09.645465  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:33:09.645495  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 10:33:09.645556  108479 httplog.go:90] GET /healthz: (923.843µs) 0 [Go-http-client/1.1 127.0.0.1:52862]
I0919 10:33:09.645626  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:volume-scheduler
I0919 10:33:09.646631  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-proxier: (803.523µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52862]
I0919 10:33:09.648768  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.614591ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52862]
I0919 10:33:09.649110  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-proxier
I0919 10:33:09.650384  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-scheduler: (926.859µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52862]
I0919 10:33:09.652551  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.690483ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52862]
I0919 10:33:09.652861  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler
I0919 10:33:09.653767  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-provisioner: (694.75µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52862]
I0919 10:33:09.655465  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.361082ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52862]
I0919 10:33:09.655834  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:csi-external-provisioner
I0919 10:33:09.656979  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:attachdetach-controller: (927.342µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52862]
I0919 10:33:09.658644  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.241619ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52862]
I0919 10:33:09.658935  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0919 10:33:09.660046  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:33:09.660079  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 10:33:09.660121  108479 httplog.go:90] GET /healthz: (987.076µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52862]
I0919 10:33:09.660434  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:clusterrole-aggregation-controller: (1.191176ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:09.662394  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.456237ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:09.662647  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0919 10:33:09.663635  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:cronjob-controller: (806.059µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:09.665555  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.416952ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:09.665867  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0919 10:33:09.667109  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:daemon-set-controller: (1.030456ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:09.669908  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.982086ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:09.670265  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0919 10:33:09.672046  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:deployment-controller: (1.542899ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:09.674034  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.44762ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:09.674334  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller
I0919 10:33:09.675251  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:disruption-controller: (752.543µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:09.677094  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.510084ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:09.677421  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller
I0919 10:33:09.678429  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:endpoint-controller: (807.21µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:09.680613  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.791238ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:09.680811  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0919 10:33:09.681849  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:expand-controller: (817.98µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:09.683966  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.676116ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:09.684249  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:expand-controller
I0919 10:33:09.685496  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:generic-garbage-collector: (1.015347ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:09.687915  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.00962ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:09.688509  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0919 10:33:09.690484  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:horizontal-pod-autoscaler: (1.736225ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:09.692384  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.490336ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:09.692637  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0919 10:33:09.693773  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:job-controller: (951.434µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:09.696332  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.185623ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:09.696530  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller
I0919 10:33:09.697756  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:namespace-controller: (849.833µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:09.699818  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.639051ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:09.700038  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:namespace-controller
I0919 10:33:09.700958  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:node-controller: (742.233µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:09.703761  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.586462ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:09.703988  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:node-controller
I0919 10:33:09.705227  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:persistent-volume-binder: (886.451µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:09.707472  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.827426ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:09.707760  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0919 10:33:09.709149  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pod-garbage-collector: (1.113258ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:09.711080  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.458187ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:09.711364  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0919 10:33:09.712491  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replicaset-controller: (892.499µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:09.715001  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.505646ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:09.715299  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0919 10:33:09.716751  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replication-controller: (1.201732ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:09.718929  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.540397ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:09.719217  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller
I0919 10:33:09.720229  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:resourcequota-controller: (712.124µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:09.722094  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.414ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:09.722361  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0919 10:33:09.723341  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:route-controller: (786.339µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:09.724990  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.280365ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:09.725286  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:route-controller
I0919 10:33:09.726363  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-account-controller: (815.118µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:09.728098  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.299345ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:09.728434  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-account-controller
I0919 10:33:09.729751  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-controller: (1.006507ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:09.731530  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.381455ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:09.731773  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-controller
I0919 10:33:09.732974  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:statefulset-controller: (977.456µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:09.734572  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.216248ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:09.734775  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0919 10:33:09.735739  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:ttl-controller: (752.08µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:09.737295  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.211422ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:09.737475  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-controller
I0919 10:33:09.745303  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:certificate-controller: (1.210184ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:09.745391  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:33:09.745625  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 10:33:09.745791  108479 httplog.go:90] GET /healthz: (1.221207ms) 0 [Go-http-client/1.1 127.0.0.1:52862]
I0919 10:33:09.760422  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:33:09.760458  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 10:33:09.760523  108479 httplog.go:90] GET /healthz: (1.11942ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52862]
I0919 10:33:09.766121  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.87865ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52862]
I0919 10:33:09.766471  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller
I0919 10:33:09.785468  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pvc-protection-controller: (1.160709ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52862]
I0919 10:33:09.806454  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.284151ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52862]
I0919 10:33:09.807757  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0919 10:33:09.825529  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pv-protection-controller: (1.270633ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52862]
I0919 10:33:09.845817  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:33:09.845852  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 10:33:09.846019  108479 httplog.go:90] GET /healthz: (1.386455ms) 0 [Go-http-client/1.1 127.0.0.1:52864]
I0919 10:33:09.846826  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.493226ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52862]
I0919 10:33:09.847892  108479 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0919 10:33:09.860493  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:33:09.860525  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 10:33:09.860573  108479 httplog.go:90] GET /healthz: (1.178343ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52862]
I0919 10:33:09.865400  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin: (1.181445ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52862]
I0919 10:33:09.886642  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.392558ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52862]
I0919 10:33:09.886915  108479 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin
I0919 10:33:09.906121  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (1.878412ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52862]
I0919 10:33:09.926258  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.975624ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52862]
I0919 10:33:09.926641  108479 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery
I0919 10:33:09.931784  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:09.931951  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:09.933607  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:09.933771  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:09.933798  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:09.934286  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:09.945733  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:33:09.945913  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 10:33:09.946255  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:09.945783  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:basic-user: (1.534185ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52862]
I0919 10:33:09.946262  108479 httplog.go:90] GET /healthz: (1.423537ms) 0 [Go-http-client/1.1 127.0.0.1:52864]
I0919 10:33:09.945972  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:09.945997  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:09.945997  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:09.946086  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:09.946281  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:09.960442  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:33:09.960512  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 10:33:09.960575  108479 httplog.go:90] GET /healthz: (1.181025ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:09.967393  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.268836ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:09.967769  108479 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user
I0919 10:33:09.985950  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:public-info-viewer: (1.482181ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:10.006844  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.589036ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:10.007124  108479 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:public-info-viewer
I0919 10:33:10.025837  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node-proxier: (1.616374ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:10.045647  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:33:10.045717  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 10:33:10.045822  108479 httplog.go:90] GET /healthz: (1.052038ms) 0 [Go-http-client/1.1 127.0.0.1:52862]
I0919 10:33:10.047043  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.874062ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:10.047369  108479 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier
I0919 10:33:10.060684  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:33:10.060719  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 10:33:10.060858  108479 httplog.go:90] GET /healthz: (1.393883ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:10.065422  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-controller-manager: (1.254094ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:10.086834  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.507561ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:10.087127  108479 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager
I0919 10:33:10.105612  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-dns: (1.461337ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:10.126275  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.051488ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:10.126919  108479 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns
I0919 10:33:10.135784  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:10.145521  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-scheduler: (1.396987ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:10.145706  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:33:10.145732  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 10:33:10.145759  108479 httplog.go:90] GET /healthz: (1.057591ms) 0 [Go-http-client/1.1 127.0.0.1:52862]
I0919 10:33:10.149003  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:10.160199  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:33:10.160230  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 10:33:10.160276  108479 httplog.go:90] GET /healthz: (934.083µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52862]
I0919 10:33:10.166098  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.988884ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52862]
I0919 10:33:10.166413  108479 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler
I0919 10:33:10.185425  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:volume-scheduler: (1.204149ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52862]
I0919 10:33:10.206963  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.401483ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52862]
I0919 10:33:10.207385  108479 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:volume-scheduler
I0919 10:33:10.225312  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node: (1.290133ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52862]
I0919 10:33:10.246401  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.139779ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52862]
I0919 10:33:10.246558  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:33:10.246591  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 10:33:10.246621  108479 httplog.go:90] GET /healthz: (1.931099ms) 0 [Go-http-client/1.1 127.0.0.1:52864]
I0919 10:33:10.246647  108479 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node
I0919 10:33:10.260223  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:33:10.260352  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 10:33:10.260518  108479 httplog.go:90] GET /healthz: (1.194248ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:10.265425  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:attachdetach-controller: (1.348991ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:10.287540  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.601179ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:10.287986  108479 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0919 10:33:10.305582  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:clusterrole-aggregation-controller: (1.40829ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:10.326117  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.944424ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:10.326325  108479 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0919 10:33:10.345581  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:cronjob-controller: (1.303337ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:10.345870  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:33:10.345986  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 10:33:10.346243  108479 httplog.go:90] GET /healthz: (1.532872ms) 0 [Go-http-client/1.1 127.0.0.1:52862]
I0919 10:33:10.360080  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:33:10.360110  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 10:33:10.360242  108479 httplog.go:90] GET /healthz: (934.539µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52862]
I0919 10:33:10.366116  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.735776ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52862]
I0919 10:33:10.366416  108479 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0919 10:33:10.385671  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:daemon-set-controller: (1.516844ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52862]
I0919 10:33:10.406611  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.319563ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52862]
I0919 10:33:10.407022  108479 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0919 10:33:10.425269  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:deployment-controller: (1.211795ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52862]
I0919 10:33:10.445834  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:33:10.445867  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 10:33:10.445922  108479 httplog.go:90] GET /healthz: (1.229782ms) 0 [Go-http-client/1.1 127.0.0.1:52864]
I0919 10:33:10.447318  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.107083ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52862]
I0919 10:33:10.447633  108479 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I0919 10:33:10.460643  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:33:10.460825  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 10:33:10.460982  108479 httplog.go:90] GET /healthz: (1.538521ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52862]
I0919 10:33:10.465360  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:disruption-controller: (1.273055ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52862]
I0919 10:33:10.486405  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.249544ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52862]
I0919 10:33:10.486716  108479 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I0919 10:33:10.505574  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:endpoint-controller: (1.444933ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52862]
I0919 10:33:10.526579  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.335858ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52862]
I0919 10:33:10.526932  108479 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0919 10:33:10.528614  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:10.530031  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:10.530255  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:10.530114  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:10.530278  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:10.530475  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:10.531040  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:10.545505  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:expand-controller: (1.327971ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52862]
I0919 10:33:10.545677  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:33:10.545901  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 10:33:10.546091  108479 httplog.go:90] GET /healthz: (1.291268ms) 0 [Go-http-client/1.1 127.0.0.1:52864]
I0919 10:33:10.560679  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:33:10.560716  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 10:33:10.560787  108479 httplog.go:90] GET /healthz: (1.346099ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:10.566728  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.657924ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:10.567107  108479 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
I0919 10:33:10.585586  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:generic-garbage-collector: (1.495174ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:10.606470  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.097656ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:10.606754  108479 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0919 10:33:10.625705  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:horizontal-pod-autoscaler: (1.609502ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:10.646086  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:33:10.646239  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 10:33:10.646335  108479 httplog.go:90] GET /healthz: (1.672403ms) 0 [Go-http-client/1.1 127.0.0.1:52862]
I0919 10:33:10.646394  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.261467ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:10.648893  108479 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0919 10:33:10.659952  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:33:10.659989  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 10:33:10.660030  108479 httplog.go:90] GET /healthz: (864.874µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:10.665719  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:job-controller: (1.62506ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:10.686553  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.249024ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:10.686801  108479 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I0919 10:33:10.705356  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:namespace-controller: (1.173663ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:10.726700  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.52226ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:10.727131  108479 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I0919 10:33:10.745562  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:33:10.745599  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 10:33:10.745636  108479 httplog.go:90] GET /healthz: (926.743µs) 0 [Go-http-client/1.1 127.0.0.1:52862]
I0919 10:33:10.745646  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:node-controller: (1.487919ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:10.760327  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:33:10.760357  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 10:33:10.760393  108479 httplog.go:90] GET /healthz: (1.036988ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52862]
I0919 10:33:10.766305  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.228911ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52862]
I0919 10:33:10.766549  108479 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I0919 10:33:10.785797  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:persistent-volume-binder: (1.539883ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52862]
I0919 10:33:10.806381  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.10985ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52862]
I0919 10:33:10.806772  108479 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0919 10:33:10.825862  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pod-garbage-collector: (1.516706ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52862]
I0919 10:33:10.846279  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:33:10.846311  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 10:33:10.846379  108479 httplog.go:90] GET /healthz: (1.655715ms) 0 [Go-http-client/1.1 127.0.0.1:52864]
I0919 10:33:10.846770  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.609267ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52862]
I0919 10:33:10.847057  108479 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0919 10:33:10.860397  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:33:10.860440  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 10:33:10.860495  108479 httplog.go:90] GET /healthz: (1.140061ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52862]
I0919 10:33:10.865041  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replicaset-controller: (936.088µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52862]
I0919 10:33:10.886055  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.909805ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52862]
I0919 10:33:10.886367  108479 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0919 10:33:10.905363  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replication-controller: (1.101902ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52862]
I0919 10:33:10.926947  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.821453ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52862]
I0919 10:33:10.927252  108479 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I0919 10:33:10.932082  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:10.932091  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:10.933986  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:10.934012  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:10.933987  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:10.934508  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:10.945622  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:33:10.945649  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 10:33:10.945675  108479 httplog.go:90] GET /healthz: (962.29µs) 0 [Go-http-client/1.1 127.0.0.1:52864]
I0919 10:33:10.945841  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:resourcequota-controller: (1.684123ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52862]
I0919 10:33:10.946491  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:10.947012  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:10.947028  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:10.947031  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:10.947057  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:10.947130  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:10.960781  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:33:10.960808  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 10:33:10.960839  108479 httplog.go:90] GET /healthz: (1.319363ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52862]
I0919 10:33:10.966068  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.018294ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52862]
I0919 10:33:10.966464  108479 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0919 10:33:10.985967  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:route-controller: (1.829638ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52862]
I0919 10:33:11.006568  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.424598ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52862]
I0919 10:33:11.006808  108479 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I0919 10:33:11.026070  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-account-controller: (1.920213ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52862]
I0919 10:33:11.045975  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:33:11.046036  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 10:33:11.046082  108479 httplog.go:90] GET /healthz: (1.332503ms) 0 [Go-http-client/1.1 127.0.0.1:52864]
I0919 10:33:11.046270  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.150902ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52862]
I0919 10:33:11.046518  108479 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I0919 10:33:11.060682  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:33:11.060824  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 10:33:11.060926  108479 httplog.go:90] GET /healthz: (1.503361ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52862]
I0919 10:33:11.065237  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-controller: (1.150638ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52862]
I0919 10:33:11.086657  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.34477ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52862]
I0919 10:33:11.086937  108479 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I0919 10:33:11.105612  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:statefulset-controller: (1.438726ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52862]
I0919 10:33:11.126348  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.096082ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52862]
I0919 10:33:11.126634  108479 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0919 10:33:11.136000  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:11.145763  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:33:11.145809  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 10:33:11.145903  108479 httplog.go:90] GET /healthz: (1.144928ms) 0 [Go-http-client/1.1 127.0.0.1:52864]
I0919 10:33:11.145912  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:ttl-controller: (1.757962ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52862]
I0919 10:33:11.149218  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:11.160312  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:33:11.160346  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 10:33:11.160383  108479 httplog.go:90] GET /healthz: (1.080334ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52862]
I0919 10:33:11.166294  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.211991ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52862]
I0919 10:33:11.166587  108479 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I0919 10:33:11.185611  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:certificate-controller: (1.278497ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52862]
I0919 10:33:11.206784  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.432006ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52862]
I0919 10:33:11.207123  108479 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I0919 10:33:11.225718  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pvc-protection-controller: (1.555336ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52862]
I0919 10:33:11.245890  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:33:11.245924  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 10:33:11.245962  108479 httplog.go:90] GET /healthz: (1.312252ms) 0 [Go-http-client/1.1 127.0.0.1:52864]
I0919 10:33:11.246246  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.088791ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52862]
I0919 10:33:11.246453  108479 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0919 10:33:11.260900  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:33:11.260935  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 10:33:11.260982  108479 httplog.go:90] GET /healthz: (1.171798ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52862]
I0919 10:33:11.265319  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pv-protection-controller: (1.209137ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52862]
I0919 10:33:11.286544  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.363765ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52862]
I0919 10:33:11.286778  108479 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0919 10:33:11.305760  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/extension-apiserver-authentication-reader: (1.527223ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52862]
I0919 10:33:11.307821  108479 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.702794ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52862]
I0919 10:33:11.326370  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.216303ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52862]
I0919 10:33:11.326582  108479 storage_rbac.go:278] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
I0919 10:33:11.345786  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:33:11.346011  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 10:33:11.345855  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:bootstrap-signer: (1.590609ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52862]
I0919 10:33:11.346203  108479 httplog.go:90] GET /healthz: (1.511665ms) 0 [Go-http-client/1.1 127.0.0.1:52864]
I0919 10:33:11.347997  108479 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.199036ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:11.360389  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:33:11.360616  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 10:33:11.360814  108479 httplog.go:90] GET /healthz: (1.479364ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:11.366017  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.98801ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:11.366296  108479 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0919 10:33:11.386283  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:cloud-provider: (1.622036ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:11.388713  108479 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.57614ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:11.406351  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.182987ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:11.406847  108479 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0919 10:33:11.425413  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:token-cleaner: (1.163355ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:11.427132  108479 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.159533ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:11.445745  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:33:11.445782  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 10:33:11.445857  108479 httplog.go:90] GET /healthz: (1.195896ms) 0 [Go-http-client/1.1 127.0.0.1:52862]
I0919 10:33:11.446498  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.252188ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:11.446828  108479 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0919 10:33:11.460454  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:33:11.460488  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 10:33:11.460539  108479 httplog.go:90] GET /healthz: (1.261955ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:11.465411  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-controller-manager: (1.306113ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:11.467291  108479 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.371419ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:11.487101  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.585983ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:11.487420  108479 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0919 10:33:11.505670  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-scheduler: (1.273576ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:11.507722  108479 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.641627ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:11.526805  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.43071ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:11.527090  108479 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0919 10:33:11.528807  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:11.530464  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:11.530492  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:11.530496  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:11.530516  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:11.530625  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:11.531192  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:11.545550  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles/system:controller:bootstrap-signer: (1.356771ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:11.546024  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:33:11.546644  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 10:33:11.546735  108479 httplog.go:90] GET /healthz: (1.946932ms) 0 [Go-http-client/1.1 127.0.0.1:52862]
I0919 10:33:11.548290  108479 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.389245ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:11.562484  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:33:11.562518  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 10:33:11.562567  108479 httplog.go:90] GET /healthz: (3.168418ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:11.565956  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles: (1.934029ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:11.566723  108479 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0919 10:33:11.585417  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings/system:controller:bootstrap-signer: (1.080576ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:11.587110  108479 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.207813ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:11.607027  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings: (2.864867ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:11.607353  108479 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0919 10:33:11.625239  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::extension-apiserver-authentication-reader: (1.137651ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:11.627343  108479 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.504154ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:11.645841  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:33:11.645876  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 10:33:11.645909  108479 httplog.go:90] GET /healthz: (1.228579ms) 0 [Go-http-client/1.1 127.0.0.1:52862]
I0919 10:33:11.646369  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.113469ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:11.646614  108479 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::extension-apiserver-authentication-reader in kube-system
I0919 10:33:11.660432  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:33:11.660469  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 10:33:11.660506  108479 httplog.go:90] GET /healthz: (1.146514ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:11.665266  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-controller-manager: (1.203252ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:11.666775  108479 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.117584ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:11.686190  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.906973ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:11.686486  108479 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0919 10:33:11.705569  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-scheduler: (1.408606ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:11.707147  108479 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.14294ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:11.726040  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.933386ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:11.726347  108479 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0919 10:33:11.745726  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:33:11.745762  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 10:33:11.745796  108479 httplog.go:90] GET /healthz: (1.086565ms) 0 [Go-http-client/1.1 127.0.0.1:52862]
I0919 10:33:11.745873  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:bootstrap-signer: (1.701465ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:11.747803  108479 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.212496ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52862]
I0919 10:33:11.760709  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:33:11.760756  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 10:33:11.760794  108479 httplog.go:90] GET /healthz: (1.497308ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52862]
I0919 10:33:11.766110  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.946711ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52862]
I0919 10:33:11.766498  108479 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0919 10:33:11.785583  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:cloud-provider: (1.455956ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52862]
I0919 10:33:11.787506  108479 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.188338ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52862]
I0919 10:33:11.806507  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.175888ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52862]
I0919 10:33:11.806789  108479 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0919 10:33:11.825201  108479 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:token-cleaner: (1.074085ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52862]
I0919 10:33:11.827832  108479 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.939298ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52862]
I0919 10:33:11.845910  108479 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0919 10:33:11.846045  108479 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0919 10:33:11.846081  108479 httplog.go:90] GET /healthz: (1.392601ms) 0 [Go-http-client/1.1 127.0.0.1:52864]
I0919 10:33:11.846669  108479 httplog.go:90] GET /api/v1/namespaces/default: (2.269249ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55422]
I0919 10:33:11.847000  108479 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.82416ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52862]
I0919 10:33:11.847947  108479 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0919 10:33:11.848636  108479 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.639091ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55422]
I0919 10:33:11.850193  108479 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.151948ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55422]
I0919 10:33:11.861151  108479 httplog.go:90] GET /healthz: (1.801974ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52862]
I0919 10:33:11.862900  108479 httplog.go:90] GET /api/v1/namespaces/default: (1.259298ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52862]
I0919 10:33:11.865095  108479 httplog.go:90] POST /api/v1/namespaces: (1.526961ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52862]
I0919 10:33:11.866599  108479 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.054758ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52862]
I0919 10:33:11.871781  108479 httplog.go:90] POST /api/v1/namespaces/default/services: (3.779036ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52862]
I0919 10:33:11.873220  108479 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (849.495µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52862]
I0919 10:33:11.875387  108479 httplog.go:90] POST /api/v1/namespaces/default/endpoints: (1.704481ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52862]
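The sequence just above (GET `/api/v1/namespaces/default` → 404, POST → 201, then the same dance for the `kubernetes` Service and Endpoints) is the bootstrap controller's ensure-exists pattern: look the object up and create it only when missing. A toy sketch of that idempotent pattern, with a plain map standing in for the API storage:

```go
package main

import "fmt"

// ensure implements the GET-404-then-POST-201 pattern from the log:
// return 200 if the object already exists, otherwise create it and
// return 201. The map is an illustrative stand-in for API storage.
func ensure(store map[string]bool, name string) int {
	if store[name] {
		return 200 // GET found it; nothing to do
	}
	store[name] = true // GET was a 404; create the object
	return 201
}

func main() {
	store := map[string]bool{}
	fmt.Println(ensure(store, "default")) // first call creates: 201
	fmt.Println(ensure(store, "default")) // already present: 200
}
```

Because a repeat call is a no-op 200, the controller can safely re-run this on every startup — the same reconcile shape the RBAC `storage_rbac.go` lines (GET role → 404 → POST → "created role...") follow earlier in the log.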
I0919 10:33:11.932315  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:11.932321  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:11.934153  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:11.934155  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:11.934204  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:11.934665  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:11.946040  108479 httplog.go:90] GET /healthz: (1.260918ms) 200 [Go-http-client/1.1 127.0.0.1:52862]
I0919 10:33:11.946627  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:11.947313  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:11.947229  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:11.947241  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:11.947252  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:11.947224  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
W0919 10:33:11.947952  108479 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 10:33:11.948053  108479 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 10:33:11.948161  108479 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 10:33:11.948292  108479 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 10:33:11.948396  108479 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 10:33:11.948462  108479 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 10:33:11.948542  108479 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 10:33:11.948591  108479 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 10:33:11.948647  108479 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 10:33:11.948762  108479 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 10:33:11.948857  108479 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 10:33:11.948969  108479 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0919 10:33:11.949128  108479 factory.go:294] Creating scheduler from algorithm provider 'DefaultProvider'
I0919 10:33:11.949230  108479 factory.go:382] Creating scheduler with fit predicates 'map[CheckNodeUnschedulable:{} CheckVolumeBinding:{} GeneralPredicates:{} MatchInterPodAffinity:{} MaxAzureDiskVolumeCount:{} MaxCSIVolumeCountPred:{} MaxEBSVolumeCount:{} MaxGCEPDVolumeCount:{} NoDiskConflict:{} NoVolumeZoneConflict:{} PodToleratesNodeTaints:{}]' and priority functions 'map[BalancedResourceAllocation:{} ImageLocalityPriority:{} InterPodAffinityPriority:{} LeastRequestedPriority:{} NodeAffinityPriority:{} NodePreferAvoidPodsPriority:{} SelectorSpreadPriority:{} TaintTolerationPriority:{}]'
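The `factory.go` line above shows the scheduler assembled from the `DefaultProvider`: fit predicates are hard filters (a node failing any one is infeasible), and priority functions are soft scores used to rank the survivors. A simplified sketch of that filter-then-score loop — the `node` fields and scoring here are invented for illustration, not kube-scheduler's real data model:

```go
package main

import "fmt"

type node struct {
	name string
	pids int // free PID headroom; illustrative, echoing TestNodePIDPressure
}

// predicate is a hard filter, in the spirit of PodToleratesNodeTaints.
type predicate func(node) bool

// priority is a soft score, in the spirit of TaintTolerationPriority.
type priority func(node) int

// schedule runs the two phases the factory log names: filter nodes
// through every predicate, then pick the highest-scoring survivor.
func schedule(nodes []node, preds []predicate, prios []priority) (string, bool) {
	best, bestScore, found := "", -1, false
	for _, n := range nodes {
		feasible := true
		for _, p := range preds {
			if !p(n) {
				feasible = false
				break
			}
		}
		if !feasible {
			continue
		}
		score := 0
		for _, pr := range prios {
			score += pr(n)
		}
		if score > bestScore {
			best, bestScore, found = n.name, score, true
		}
	}
	return best, found
}

func main() {
	nodes := []node{{"node-0", 0}, {"node-1", 500}}
	noPIDPressure := predicate(func(n node) bool { return n.pids > 0 })
	morePIDs := priority(func(n node) int { return n.pids / 10 })
	name, ok := schedule(nodes, []predicate{noPIDPressure}, []priority{morePIDs})
	fmt.Println(name, ok) // node-1 true
}
```

In the failing `TestNodePIDPressure`, the interesting predicate is the taint-based one: a node under PID pressure should carry a taint that filters it out here.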
I0919 10:33:11.949559  108479 shared_informer.go:197] Waiting for caches to sync for scheduler
I0919 10:33:11.950007  108479 reflector.go:118] Starting reflector *v1.Pod (12h0m0s) from k8s.io/kubernetes/test/integration/scheduler/util.go:231
I0919 10:33:11.950131  108479 reflector.go:153] Listing and watching *v1.Pod from k8s.io/kubernetes/test/integration/scheduler/util.go:231
I0919 10:33:11.951128  108479 httplog.go:90] GET /api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0: (521.732µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52862]
I0919 10:33:11.952099  108479 get.go:251] Starting watch for /api/v1/pods, rv=59367 labels= fields=status.phase!=Failed,status.phase!=Succeeded timeout=7m48s
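The "Listing and watching" plus "Starting watch for /api/v1/pods, rv=59367" pair above is the reflector's list-then-watch protocol: take a full List snapshot, then resume a watch from the List's `resourceVersion` so only newer events are applied. A toy stdlib sketch of that resume logic — `event` and `listAndWatch` are simplified stand-ins for the client-go machinery:

```go
package main

import "fmt"

// event is a simplified watch event; rv plays the role of the
// resourceVersion the reflector resumes from.
type event struct {
	rv  int
	obj string
}

// listAndWatch mimics the reflector: seed the store from the List
// snapshot, then apply only watch events newer than the List's rv,
// skipping anything already reflected in the snapshot.
func listAndWatch(listRV int, listed []string, events []event) []string {
	store := append([]string(nil), listed...)
	for _, e := range events {
		if e.rv <= listRV {
			continue // already captured by the List snapshot
		}
		store = append(store, e.obj)
	}
	return store
}

func main() {
	// List returned rv=59367, as in the pod watch line above.
	got := listAndWatch(59367, []string{"pod-a"}, []event{
		{59360, "stale-event"},
		{59400, "pod-b"},
	})
	fmt.Println(got) // [pod-a pod-b]
}
```

The periodic "forcing resync" lines elsewhere in the log are a separate mechanism: they replay the store's current contents to event handlers without contacting the server.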
I0919 10:33:12.049929  108479 shared_informer.go:227] caches populated
I0919 10:33:12.050112  108479 shared_informer.go:204] Caches are synced for scheduler 
I0919 10:33:12.050586  108479 reflector.go:118] Starting reflector *v1.Node (1s) from k8s.io/client-go/informers/factory.go:134
I0919 10:33:12.050611  108479 reflector.go:153] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:134
I0919 10:33:12.050618  108479 reflector.go:118] Starting reflector *v1.PersistentVolume (1s) from k8s.io/client-go/informers/factory.go:134
I0919 10:33:12.050638  108479 reflector.go:153] Listing and watching *v1.PersistentVolume from k8s.io/client-go/informers/factory.go:134
I0919 10:33:12.050733  108479 reflector.go:118] Starting reflector *v1.ReplicaSet (1s) from k8s.io/client-go/informers/factory.go:134
I0919 10:33:12.050752  108479 reflector.go:153] Listing and watching *v1.ReplicaSet from k8s.io/client-go/informers/factory.go:134
I0919 10:33:12.050858  108479 reflector.go:118] Starting reflector *v1.ReplicationController (1s) from k8s.io/client-go/informers/factory.go:134
I0919 10:33:12.050872  108479 reflector.go:153] Listing and watching *v1.ReplicationController from k8s.io/client-go/informers/factory.go:134
I0919 10:33:12.050911  108479 reflector.go:118] Starting reflector *v1.Service (1s) from k8s.io/client-go/informers/factory.go:134
I0919 10:33:12.050920  108479 reflector.go:153] Listing and watching *v1.Service from k8s.io/client-go/informers/factory.go:134
I0919 10:33:12.051117  108479 reflector.go:118] Starting reflector *v1.StatefulSet (1s) from k8s.io/client-go/informers/factory.go:134
I0919 10:33:12.051131  108479 reflector.go:153] Listing and watching *v1.StatefulSet from k8s.io/client-go/informers/factory.go:134
I0919 10:33:12.052366  108479 httplog.go:90] GET /apis/apps/v1/statefulsets?limit=500&resourceVersion=0: (430.243µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52914]
I0919 10:33:12.052438  108479 httplog.go:90] GET /api/v1/persistentvolumes?limit=500&resourceVersion=0: (566.458µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52906]
I0919 10:33:12.052710  108479 httplog.go:90] GET /apis/apps/v1/replicasets?limit=500&resourceVersion=0: (250.534µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52908]
I0919 10:33:12.053015  108479 httplog.go:90] GET /api/v1/nodes?limit=500&resourceVersion=0: (1.412481ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52864]
I0919 10:33:12.053098  108479 httplog.go:90] GET /api/v1/replicationcontrollers?limit=500&resourceVersion=0: (292.388µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52910]
I0919 10:33:12.053328  108479 get.go:251] Starting watch for /apis/apps/v1/statefulsets, rv=59388 labels= fields= timeout=9m19s
I0919 10:33:12.053443  108479 get.go:251] Starting watch for /apis/apps/v1/replicasets, rv=59389 labels= fields= timeout=7m55s
I0919 10:33:12.053528  108479 reflector.go:118] Starting reflector *v1.PersistentVolumeClaim (1s) from k8s.io/client-go/informers/factory.go:134
I0919 10:33:12.053549  108479 reflector.go:153] Listing and watching *v1.PersistentVolumeClaim from k8s.io/client-go/informers/factory.go:134
I0919 10:33:12.053580  108479 httplog.go:90] GET /api/v1/services?limit=500&resourceVersion=0: (539.031µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52912]
I0919 10:33:12.053760  108479 get.go:251] Starting watch for /api/v1/nodes, rv=59367 labels= fields= timeout=9m59s
I0919 10:33:12.053902  108479 get.go:251] Starting watch for /api/v1/persistentvolumes, rv=59367 labels= fields= timeout=5m42s
I0919 10:33:12.054106  108479 httplog.go:90] GET /api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: (361.863µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52910]
I0919 10:33:12.054339  108479 get.go:251] Starting watch for /api/v1/replicationcontrollers, rv=59369 labels= fields= timeout=8m6s
I0919 10:33:12.054460  108479 get.go:251] Starting watch for /api/v1/services, rv=59611 labels= fields= timeout=6m20s
I0919 10:33:12.054707  108479 reflector.go:118] Starting reflector *v1beta1.CSINode (1s) from k8s.io/client-go/informers/factory.go:134
I0919 10:33:12.054717  108479 reflector.go:118] Starting reflector *v1.StorageClass (1s) from k8s.io/client-go/informers/factory.go:134
I0919 10:33:12.054728  108479 reflector.go:153] Listing and watching *v1beta1.CSINode from k8s.io/client-go/informers/factory.go:134
I0919 10:33:12.054735  108479 reflector.go:153] Listing and watching *v1.StorageClass from k8s.io/client-go/informers/factory.go:134
I0919 10:33:12.055078  108479 get.go:251] Starting watch for /api/v1/persistentvolumeclaims, rv=59367 labels= fields= timeout=6m20s
I0919 10:33:12.055334  108479 reflector.go:118] Starting reflector *v1beta1.PodDisruptionBudget (1s) from k8s.io/client-go/informers/factory.go:134
I0919 10:33:12.055354  108479 reflector.go:153] Listing and watching *v1beta1.PodDisruptionBudget from k8s.io/client-go/informers/factory.go:134
I0919 10:33:12.055434  108479 httplog.go:90] GET /apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: (355.363µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52918]
I0919 10:33:12.056082  108479 httplog.go:90] GET /apis/storage.k8s.io/v1beta1/csinodes?limit=500&resourceVersion=0: (627.229µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52920]
I0919 10:33:12.056149  108479 httplog.go:90] GET /apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: (275.249µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52918]
I0919 10:33:12.056287  108479 get.go:251] Starting watch for /apis/storage.k8s.io/v1/storageclasses, rv=59387 labels= fields= timeout=5m8s
I0919 10:33:12.056628  108479 get.go:251] Starting watch for /apis/policy/v1beta1/poddisruptionbudgets, rv=59383 labels= fields= timeout=9m19s
I0919 10:33:12.056724  108479 get.go:251] Starting watch for /apis/storage.k8s.io/v1beta1/csinodes, rv=59386 labels= fields= timeout=7m7s
I0919 10:33:12.136241  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:12.149398  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:12.150563  108479 shared_informer.go:227] caches populated
I0919 10:33:12.150587  108479 shared_informer.go:227] caches populated
I0919 10:33:12.150594  108479 shared_informer.go:227] caches populated
I0919 10:33:12.150601  108479 shared_informer.go:227] caches populated
I0919 10:33:12.150607  108479 shared_informer.go:227] caches populated
I0919 10:33:12.150613  108479 shared_informer.go:227] caches populated
I0919 10:33:12.150620  108479 shared_informer.go:227] caches populated
I0919 10:33:12.150639  108479 shared_informer.go:227] caches populated
I0919 10:33:12.150646  108479 shared_informer.go:227] caches populated
I0919 10:33:12.150675  108479 shared_informer.go:227] caches populated
I0919 10:33:12.150685  108479 shared_informer.go:227] caches populated
I0919 10:33:12.153559  108479 httplog.go:90] POST /api/v1/namespaces: (2.177137ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52924]
I0919 10:33:12.153944  108479 node_lifecycle_controller.go:327] Sending events to api server.
I0919 10:33:12.154031  108479 node_lifecycle_controller.go:359] Controller is using taint based evictions.
W0919 10:33:12.154050  108479 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0919 10:33:12.154125  108479 taint_manager.go:162] Sending events to api server.
I0919 10:33:12.154208  108479 node_lifecycle_controller.go:453] Controller will reconcile labels.
I0919 10:33:12.154228  108479 node_lifecycle_controller.go:465] Controller will taint node by condition.
W0919 10:33:12.154237  108479 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0919 10:33:12.154273  108479 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0919 10:33:12.154318  108479 node_lifecycle_controller.go:488] Starting node controller
I0919 10:33:12.154340  108479 shared_informer.go:197] Waiting for caches to sync for taint
I0919 10:33:12.154507  108479 reflector.go:118] Starting reflector *v1.Namespace (1s) from k8s.io/client-go/informers/factory.go:134
I0919 10:33:12.154523  108479 reflector.go:153] Listing and watching *v1.Namespace from k8s.io/client-go/informers/factory.go:134
I0919 10:33:12.155654  108479 httplog.go:90] GET /api/v1/namespaces?limit=500&resourceVersion=0: (846.393µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52924]
I0919 10:33:12.156476  108479 get.go:251] Starting watch for /api/v1/namespaces, rv=59615 labels= fields= timeout=8m15s
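The node lifecycle controller startup above logs "Controller will taint node by condition" — the feature this PR promotes to GA. The idea: each unhealthy node condition is mirrored as a `NoSchedule` taint, so scheduling decisions flow through taints/tolerations instead of condition-specific checks. A sketch of that mapping, assuming simplified types (the taint keys listed are the real `node.kubernetes.io/*` convention, but the structs are stand-ins for the Kubernetes API objects):

```go
package main

import (
	"fmt"
	"sort"
)

// conditionToTaint maps a node condition that is True to the taint
// key the lifecycle controller should place on the node.
var conditionToTaint = map[string]string{
	"MemoryPressure":     "node.kubernetes.io/memory-pressure",
	"DiskPressure":       "node.kubernetes.io/disk-pressure",
	"PIDPressure":        "node.kubernetes.io/pid-pressure",
	"NetworkUnavailable": "node.kubernetes.io/network-unavailable",
}

// taintsFor returns the sorted taint keys a node should carry given
// which of its conditions are currently True.
func taintsFor(conditions map[string]bool) []string {
	var taints []string
	for cond, isTrue := range conditions {
		if key, ok := conditionToTaint[cond]; ok && isTrue {
			taints = append(taints, key)
		}
	}
	sort.Strings(taints)
	return taints
}

func main() {
	// A node reporting PID pressure, as exercised by TestNodePIDPressure.
	fmt.Println(taintsFor(map[string]bool{"PIDPressure": true, "DiskPressure": false}))
}
```

With the taint in place, only pods that explicitly tolerate `node.kubernetes.io/pid-pressure` can land on the node — which is what the integration test asserts.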
I0919 10:33:12.253268  108479 node_lifecycle_controller.go:718] Controller observed a Node deletion: node-0
I0919 10:33:12.253313  108479 controller_utils.go:168] Recording Removing Node node-0 from Controller event message for node node-0
I0919 10:33:12.253367  108479 node_lifecycle_controller.go:718] Controller observed a Node deletion: node-1
I0919 10:33:12.253373  108479 controller_utils.go:168] Recording Removing Node node-1 from Controller event message for node node-1
I0919 10:33:12.253382  108479 node_lifecycle_controller.go:718] Controller observed a Node deletion: node-2
I0919 10:33:12.253387  108479 controller_utils.go:168] Recording Removing Node node-2 from Controller event message for node node-2
I0919 10:33:12.253472  108479 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-2", UID:"1d5d768c-1d65-44e1-9325-cc4c34a61171", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RemovingNode' Node node-2 event: Removing Node node-2 from Controller
I0919 10:33:12.253538  108479 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-1", UID:"e312d537-98f3-418e-8ecc-64a50c6bf85f", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RemovingNode' Node node-1 event: Removing Node node-1 from Controller
I0919 10:33:12.253551  108479 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-0", UID:"f43a1532-f188-4af7-aa0d-7422193e2b60", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RemovingNode' Node node-0 event: Removing Node node-0 from Controller
I0919 10:33:12.254569  108479 shared_informer.go:227] caches populated
I0919 10:33:12.254641  108479 shared_informer.go:227] caches populated
I0919 10:33:12.254650  108479 shared_informer.go:227] caches populated
I0919 10:33:12.254655  108479 shared_informer.go:227] caches populated
I0919 10:33:12.254661  108479 shared_informer.go:227] caches populated
I0919 10:33:12.254667  108479 shared_informer.go:227] caches populated
I0919 10:33:12.254879  108479 reflector.go:118] Starting reflector *v1.Pod (1s) from k8s.io/client-go/informers/factory.go:134
I0919 10:33:12.254907  108479 reflector.go:153] Listing and watching *v1.Pod from k8s.io/client-go/informers/factory.go:134
I0919 10:33:12.255122  108479 reflector.go:118] Starting reflector *v1beta1.Lease (1s) from k8s.io/client-go/informers/factory.go:134
I0919 10:33:12.255311  108479 reflector.go:153] Listing and watching *v1beta1.Lease from k8s.io/client-go/informers/factory.go:134
I0919 10:33:12.255164  108479 reflector.go:118] Starting reflector *v1.DaemonSet (1s) from k8s.io/client-go/informers/factory.go:134
I0919 10:33:12.255424  108479 reflector.go:153] Listing and watching *v1.DaemonSet from k8s.io/client-go/informers/factory.go:134
I0919 10:33:12.256314  108479 httplog.go:90] GET /apis/coordination.k8s.io/v1beta1/leases?limit=500&resourceVersion=0: (428.362µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52930]
I0919 10:33:12.256490  108479 httplog.go:90] GET /apis/apps/v1/daemonsets?limit=500&resourceVersion=0: (623.431µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52928]
I0919 10:33:12.256323  108479 httplog.go:90] POST /api/v1/namespaces/default/events: (2.290199ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36182]
I0919 10:33:12.256374  108479 httplog.go:90] GET /api/v1/pods?limit=500&resourceVersion=0: (777.303µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52926]
I0919 10:33:12.257186  108479 get.go:251] Starting watch for /apis/coordination.k8s.io/v1beta1/leases, rv=59379 labels= fields= timeout=9m4s
I0919 10:33:12.257375  108479 get.go:251] Starting watch for /apis/apps/v1/daemonsets, rv=59389 labels= fields= timeout=9m24s
I0919 10:33:12.258208  108479 get.go:251] Starting watch for /api/v1/pods, rv=59367 labels= fields= timeout=9m57s
I0919 10:33:12.258868  108479 httplog.go:90] POST /api/v1/namespaces/default/events: (1.914384ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36182]
I0919 10:33:12.260993  108479 httplog.go:90] POST /api/v1/namespaces/default/events: (1.642334ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36182]
I0919 10:33:12.354548  108479 shared_informer.go:227] caches populated
I0919 10:33:12.354585  108479 shared_informer.go:204] Caches are synced for taint 
I0919 10:33:12.354622  108479 taint_manager.go:186] Starting NoExecuteTaintManager
I0919 10:33:12.354845  108479 shared_informer.go:227] caches populated
I0919 10:33:12.354859  108479 shared_informer.go:227] caches populated
I0919 10:33:12.354866  108479 shared_informer.go:227] caches populated
I0919 10:33:12.354872  108479 shared_informer.go:227] caches populated
I0919 10:33:12.354884  108479 shared_informer.go:227] caches populated
I0919 10:33:12.354890  108479 shared_informer.go:227] caches populated
I0919 10:33:12.354896  108479 shared_informer.go:227] caches populated
I0919 10:33:12.354902  108479 shared_informer.go:227] caches populated
I0919 10:33:12.358390  108479 httplog.go:90] POST /api/v1/nodes: (2.23443ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:12.358729  108479 node_tree.go:93] Added node "node-0" in group "region1:\x00:zone1" to NodeTree
I0919 10:33:12.358822  108479 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-0"}
I0919 10:33:12.358921  108479 taint_manager.go:438] Updating known taints on node node-0: []
I0919 10:33:12.360924  108479 httplog.go:90] POST /api/v1/nodes: (1.772909ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:12.361039  108479 node_tree.go:93] Added node "node-1" in group "region1:\x00:zone1" to NodeTree
I0919 10:33:12.361125  108479 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-1"}
I0919 10:33:12.361147  108479 taint_manager.go:438] Updating known taints on node node-1: []
I0919 10:33:12.362976  108479 httplog.go:90] POST /api/v1/nodes: (1.521368ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:12.363301  108479 node_tree.go:93] Added node "node-2" in group "region1:\x00:zone1" to NodeTree
I0919 10:33:12.363314  108479 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-2"}
I0919 10:33:12.363331  108479 taint_manager.go:438] Updating known taints on node node-2: []
I0919 10:33:12.365347  108479 httplog.go:90] POST /api/v1/namespaces/taint-based-evictionsf0849da6-26d9-4840-b19a-f553e6867734/pods: (1.811251ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:12.366097  108479 taint_manager.go:398] Noticed pod update: types.NamespacedName{Namespace:"taint-based-evictionsf0849da6-26d9-4840-b19a-f553e6867734", Name:"testpod-2"}
I0919 10:33:12.366261  108479 scheduling_queue.go:830] About to try and schedule pod taint-based-evictionsf0849da6-26d9-4840-b19a-f553e6867734/testpod-2
I0919 10:33:12.366291  108479 scheduler.go:530] Attempting to schedule pod: taint-based-evictionsf0849da6-26d9-4840-b19a-f553e6867734/testpod-2
I0919 10:33:12.366671  108479 scheduler_binder.go:257] AssumePodVolumes for pod "taint-based-evictionsf0849da6-26d9-4840-b19a-f553e6867734/testpod-2", node "node-2"
I0919 10:33:12.366695  108479 scheduler_binder.go:267] AssumePodVolumes for pod "taint-based-evictionsf0849da6-26d9-4840-b19a-f553e6867734/testpod-2", node "node-2": all PVCs bound and nothing to do
I0919 10:33:12.366752  108479 factory.go:606] Attempting to bind testpod-2 to node-2
I0919 10:33:12.368702  108479 httplog.go:90] POST /api/v1/namespaces/taint-based-evictionsf0849da6-26d9-4840-b19a-f553e6867734/pods/testpod-2/binding: (1.667829ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:12.368910  108479 scheduler.go:662] pod taint-based-evictionsf0849da6-26d9-4840-b19a-f553e6867734/testpod-2 is bound successfully on node "node-2", 3 nodes evaluated, 3 nodes were found feasible. Bound node resource: "Capacity: CPU<4>|Memory<16Gi>|Pods<110>|StorageEphemeral<0>; Allocatable: CPU<4>|Memory<16Gi>|Pods<110>|StorageEphemeral<0>.".
I0919 10:33:12.369077  108479 taint_manager.go:398] Noticed pod update: types.NamespacedName{Namespace:"taint-based-evictionsf0849da6-26d9-4840-b19a-f553e6867734", Name:"testpod-2"}
I0919 10:33:12.370941  108479 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/taint-based-evictionsf0849da6-26d9-4840-b19a-f553e6867734/events: (1.716376ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:12.467678  108479 httplog.go:90] GET /api/v1/namespaces/taint-based-evictionsf0849da6-26d9-4840-b19a-f553e6867734/pods/testpod-2: (1.618416ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:12.469670  108479 httplog.go:90] GET /api/v1/namespaces/taint-based-evictionsf0849da6-26d9-4840-b19a-f553e6867734/pods/testpod-2: (1.424268ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:12.471116  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.089894ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:12.473800  108479 httplog.go:90] PUT /api/v1/nodes/node-2/status: (2.217886ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:12.474594  108479 httplog.go:90] GET /api/v1/nodes/node-2?resourceVersion=0: (455.767µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:12.477543  108479 httplog.go:90] PATCH /api/v1/nodes/node-2: (2.140766ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:12.477807  108479 controller_utils.go:204] Added [&Taint{Key:node.kubernetes.io/not-ready,Value:,Effect:NoSchedule,TimeAdded:2019-09-19 10:33:12.473933676 +0000 UTC m=+312.309058476,}] Taint to Node node-2
I0919 10:33:12.477845  108479 controller_utils.go:216] Made sure that Node node-2 has no [] Taint
I0919 10:33:12.528981  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:12.530770  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:12.530791  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:12.530960  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:12.530936  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:12.530947  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:12.531377  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:12.576445  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.853014ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:12.676366  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.786001ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:12.776379  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.790625ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:12.876124  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.607414ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:12.932624  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:12.932755  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:12.934405  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:12.934428  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:12.934408  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:12.934866  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:12.946793  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:12.947496  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:12.947686  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:12.947701  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:12.947893  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:12.947927  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:12.976269  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.694498ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:13.053120  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:13.053513  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:13.054098  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:13.054579  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:13.056064  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:13.056603  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:13.077238  108479 httplog.go:90] GET /api/v1/nodes/node-2: (2.448482ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:13.136554  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:13.149589  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:13.176306  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.808629ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:13.257202  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:13.276387  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.782456ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:13.376240  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.750504ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:13.476420  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.892336ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:13.529512  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:13.531068  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:13.531232  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:13.531081  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:13.531215  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:13.531262  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:13.531544  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:13.576820  108479 httplog.go:90] GET /api/v1/nodes/node-2: (2.316455ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:13.676264  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.68458ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:13.776440  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.863943ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:13.876336  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.764687ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:13.932841  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:13.932936  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:13.934557  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:13.934561  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:13.934605  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:13.935092  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:13.946997  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:13.947677  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:13.947769  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:13.947844  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:13.948015  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:13.948020  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:13.976272  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.522351ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:14.053304  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:14.053699  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:14.054264  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:14.054739  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:14.056277  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:14.056787  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:14.076271  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.65794ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:14.136816  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:14.149778  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:14.176353  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.769585ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:14.257475  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:14.276087  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.506603ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:14.375896  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.424554ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:14.476943  108479 httplog.go:90] GET /api/v1/nodes/node-2: (2.459088ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:14.529742  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:14.531389  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:14.531444  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:14.531473  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:14.531681  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:14.531693  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:14.531877  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:14.578814  108479 httplog.go:90] GET /api/v1/nodes/node-2: (4.279042ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:14.676214  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.6558ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:14.776698  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.73515ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:14.879070  108479 httplog.go:90] GET /api/v1/nodes/node-2: (3.944827ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:14.933024  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:14.933024  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:14.934723  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:14.934763  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:14.934767  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:14.935285  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:14.947316  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:14.947831  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:14.947860  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:14.948026  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:14.948255  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:14.948257  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:14.976562  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.984065ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:15.053426  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:15.053889  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:15.054380  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:15.054864  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:15.056405  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:15.057880  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:15.076508  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.8949ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:15.137001  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:15.149959  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:15.176516  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.91259ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:15.257656  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:15.276459  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.842685ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:15.376335  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.771895ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:15.418933  108479 httplog.go:90] GET /api/v1/namespaces/default: (1.682557ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35674]
I0919 10:33:15.420664  108479 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.18702ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35674]
I0919 10:33:15.422512  108479 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.220339ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35674]
I0919 10:33:15.476329  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.812847ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:15.530134  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:15.531563  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:15.531568  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:15.531629  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:15.531905  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:15.531906  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:15.532219  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:15.576628  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.998278ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:15.676067  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.528654ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:15.776752  108479 httplog.go:90] GET /api/v1/nodes/node-2: (2.184194ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:15.876800  108479 httplog.go:90] GET /api/v1/nodes/node-2: (2.226252ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:15.933198  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:15.933214  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:15.934960  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:15.934971  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:15.934971  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:15.935511  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:15.947643  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:15.947989  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:15.948389  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:15.948671  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:15.948673  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:15.948673  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:15.975979  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.478168ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:16.053821  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:16.054127  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:16.054669  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:16.055161  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:16.056690  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:16.058058  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:16.076088  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.481432ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:16.137218  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:16.150212  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:16.176277  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.74782ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:16.257981  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:16.276224  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.640888ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:16.376281  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.739349ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:16.476038  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.391067ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:16.530533  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:16.531773  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:16.531801  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:16.531817  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:16.532107  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:16.532107  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:16.532393  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:16.576292  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.654158ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:16.677542  108479 httplog.go:90] GET /api/v1/nodes/node-2: (2.295183ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:16.728885  108479 httplog.go:90] GET /api/v1/namespaces/default: (1.616029ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36182]
I0919 10:33:16.731059  108479 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.505359ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36182]
I0919 10:33:16.732601  108479 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.097357ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36182]
I0919 10:33:16.776199  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.596266ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:16.876230  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.655175ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:16.933399  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:16.933399  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:16.935233  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:16.935242  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:16.935364  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:16.935773  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:16.947876  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:16.948214  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:16.948619  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:16.948850  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:16.948867  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:16.948882  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:16.976385  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.86834ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:17.054002  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:17.054334  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:17.054853  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:17.055381  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:17.056769  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:17.058264  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:17.076115  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.569373ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:17.137434  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:17.150400  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:17.176513  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.970623ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:17.258245  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:17.276488  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.85791ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:17.354807  108479 node_lifecycle_controller.go:706] Controller observed a new Node: "node-0"
I0919 10:33:17.354850  108479 controller_utils.go:168] Recording Registered Node node-0 in Controller event message for node node-0
I0919 10:33:17.354923  108479 node_lifecycle_controller.go:1244] Initializing eviction metric for zone: region1:\x00:zone1
I0919 10:33:17.354942  108479 node_lifecycle_controller.go:706] Controller observed a new Node: "node-1"
I0919 10:33:17.354948  108479 controller_utils.go:168] Recording Registered Node node-1 in Controller event message for node node-1
I0919 10:33:17.354957  108479 node_lifecycle_controller.go:706] Controller observed a new Node: "node-2"
I0919 10:33:17.354962  108479 controller_utils.go:168] Recording Registered Node node-2 in Controller event message for node node-2
W0919 10:33:17.354997  108479 node_lifecycle_controller.go:940] Missing timestamp for Node node-0. Assuming now as a timestamp.
W0919 10:33:17.355050  108479 node_lifecycle_controller.go:940] Missing timestamp for Node node-1. Assuming now as a timestamp.
W0919 10:33:17.355118  108479 node_lifecycle_controller.go:940] Missing timestamp for Node node-2. Assuming now as a timestamp.
I0919 10:33:17.355154  108479 node_lifecycle_controller.go:770] Node node-2 is NotReady as of 2019-09-19 10:33:17.355139259 +0000 UTC m=+317.190264062. Adding it to the Taint queue.
I0919 10:33:17.355200  108479 node_lifecycle_controller.go:1144] Controller detected that zone region1:\x00:zone1 is now in state Normal.
I0919 10:33:17.355197  108479 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-0", UID:"8a99cef4-734c-40fc-acac-2864928e012f", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node node-0 event: Registered Node node-0 in Controller
I0919 10:33:17.355236  108479 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-1", UID:"7cdece98-9697-4f36-830c-8b57b1e91c85", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node node-1 event: Registered Node node-1 in Controller
I0919 10:33:17.355250  108479 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-2", UID:"0e112392-68d3-445c-8723-bef7eb770978", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node node-2 event: Registered Node node-2 in Controller
I0919 10:33:17.357441  108479 httplog.go:90] POST /api/v1/namespaces/default/events: (1.867865ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:17.359267  108479 httplog.go:90] POST /api/v1/namespaces/default/events: (1.402171ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:17.360998  108479 httplog.go:90] POST /api/v1/namespaces/default/events: (1.326148ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:17.362840  108479 httplog.go:90] GET /api/v1/nodes/node-2?resourceVersion=0: (407.943µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:17.365991  108479 httplog.go:90] PATCH /api/v1/nodes/node-2: (2.118264ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:17.366258  108479 controller_utils.go:204] Added [&Taint{Key:node.kubernetes.io/not-ready,Value:,Effect:NoExecute,TimeAdded:2019-09-19 10:33:17.362134032 +0000 UTC m=+317.197258827,}] Taint to Node node-2
I0919 10:33:17.366298  108479 controller_utils.go:216] Made sure that Node node-2 has no [&Taint{Key:node.kubernetes.io/unreachable,Value:,Effect:NoExecute,TimeAdded:<nil>,}] Taint
I0919 10:33:17.366645  108479 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-2"}
I0919 10:33:17.366669  108479 taint_manager.go:438] Updating known taints on node node-2: [{node.kubernetes.io/not-ready  NoExecute 2019-09-19 10:33:17 +0000 UTC}]
I0919 10:33:17.366737  108479 timed_workers.go:110] Adding TimedWorkerQueue item taint-based-evictionsf0849da6-26d9-4840-b19a-f553e6867734/testpod-2 at 2019-09-19 10:33:17.366725266 +0000 UTC m=+317.201850066 to be fired at 2019-09-19 10:33:17.366725266 +0000 UTC m=+317.201850066
I0919 10:33:17.366772  108479 taint_manager.go:105] NoExecuteTaintManager is deleting Pod: taint-based-evictionsf0849da6-26d9-4840-b19a-f553e6867734/testpod-2
I0919 10:33:17.366945  108479 event.go:255] Event(v1.ObjectReference{Kind:"Pod", Namespace:"taint-based-evictionsf0849da6-26d9-4840-b19a-f553e6867734", Name:"testpod-2", UID:"", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TaintManagerEviction' Marking for deletion Pod taint-based-evictionsf0849da6-26d9-4840-b19a-f553e6867734/testpod-2
I0919 10:33:17.368734  108479 httplog.go:90] POST /api/v1/namespaces/taint-based-evictionsf0849da6-26d9-4840-b19a-f553e6867734/events: (1.202681ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52988]
I0919 10:33:17.369273  108479 httplog.go:90] DELETE /api/v1/namespaces/taint-based-evictionsf0849da6-26d9-4840-b19a-f553e6867734/pods/testpod-2: (2.259258ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:17.375708  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.250099ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:17.476410  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.867105ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:17.530733  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:17.531913  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:17.531931  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:17.531934  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:17.532269  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:17.532331  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:17.532586  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:17.576329  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.769087ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:17.676647  108479 httplog.go:90] GET /api/v1/nodes/node-2: (2.129341ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:17.776403  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.892947ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:17.876074  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.518018ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:17.933596  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:17.933621  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:17.935385  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:17.935389  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:17.935488  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:17.936008  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:17.948118  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:17.948399  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:17.948787  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:17.949037  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:17.949074  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:17.949077  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:17.976208  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.458303ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:18.054231  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:18.054859  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:18.055029  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:18.055594  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:18.057024  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:18.058547  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:18.076040  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.477547ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:18.137634  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:18.150574  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:18.176061  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.569669ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:18.258468  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:18.276328  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.818666ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:18.376151  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.676459ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:18.476262  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.698799ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:18.531064  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:18.532202  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:18.532212  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:18.532249  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:18.532467  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:18.532484  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:18.532793  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:18.559236  108479 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.43487ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55422]
I0919 10:33:18.561132  108479 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.227644ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55422]
I0919 10:33:18.562566  108479 httplog.go:90] GET /api/v1/namespaces/kube-node-lease: (991.892µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55422]
I0919 10:33:18.575963  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.414878ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:18.676310  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.710878ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:18.776061  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.530274ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:18.876319  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.768154ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:18.933765  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:18.933876  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:18.935465  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:18.935550  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:18.935626  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:18.936159  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:18.948291  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:18.948555  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:18.948927  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:18.949298  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:18.949310  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:18.949326  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:18.976468  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.916424ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:19.054540  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:19.054957  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:19.055199  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:19.055729  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:19.057200  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:19.058743  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:19.076770  108479 httplog.go:90] GET /api/v1/nodes/node-2: (2.254519ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:19.137924  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:19.150765  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:19.175969  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.445039ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:19.258774  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:19.276335  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.841553ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:19.376312  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.856394ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:19.476163  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.634034ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:19.531235  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:19.532495  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:19.532640  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:19.532683  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:19.532726  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:19.532767  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:19.532940  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:19.576459  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.902193ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:19.675845  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.367298ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:19.780781  108479 httplog.go:90] GET /api/v1/nodes/node-2: (4.276556ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:19.876347  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.866515ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:19.934024  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:19.934023  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:19.935563  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:19.935703  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:19.935750  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:19.936299  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:19.948543  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:19.948734  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:19.949068  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:19.949396  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:19.949403  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:19.949467  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:19.976437  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.821151ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:20.054732  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:20.055148  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:20.055371  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:20.055822  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:20.057783  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:20.058879  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:20.076071  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.621764ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:20.138205  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:20.150982  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:20.176797  108479 httplog.go:90] GET /api/v1/nodes/node-2: (2.242724ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:20.259091  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:20.276260  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.692529ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:20.376313  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.696157ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:20.476283  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.660581ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:20.531582  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:20.532682  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:20.532808  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:20.532840  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:20.532808  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:20.533020  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:20.533141  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:20.576589  108479 httplog.go:90] GET /api/v1/nodes/node-2: (2.02507ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:20.676294  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.743147ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:20.776679  108479 httplog.go:90] GET /api/v1/nodes/node-2: (2.050417ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:20.876337  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.810942ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:20.934223  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:20.934275  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:20.935891  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:20.935955  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:20.935968  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:20.936525  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:20.948785  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:20.948884  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:20.949253  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:20.949549  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:20.949645  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:20.949737  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:20.976224  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.636219ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:21.054820  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:21.055467  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:21.055524  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:21.055942  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:21.057984  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:21.059074  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:21.076250  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.621965ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:21.138579  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:21.151187  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:21.176147  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.637647ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:21.259322  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:21.276434  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.838936ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:21.376490  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.885006ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:21.476262  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.6066ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:21.531878  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:21.532866  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:21.533031  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:21.533036  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:21.533039  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:21.533277  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:21.533293  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:21.576108  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.575157ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:21.676660  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.912643ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:21.776388  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.66073ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:21.846147  108479 httplog.go:90] GET /api/v1/namespaces/default: (1.52307ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55422]
I0919 10:33:21.847924  108479 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.247389ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55422]
I0919 10:33:21.849605  108479 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.116066ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55422]
I0919 10:33:21.863695  108479 httplog.go:90] GET /api/v1/namespaces/default: (1.734097ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:21.865392  108479 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.281836ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:21.867080  108479 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.230829ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:21.876738  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.740736ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:21.934432  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:21.934469  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:21.936061  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:21.936070  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:21.936070  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:21.936578  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:21.949015  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:21.949023  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:21.949415  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:21.949694  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:21.949887  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:21.949962  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:21.976120  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.571476ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:22.055015  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:22.055702  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:22.055709  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:22.056100  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:22.058147  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:22.059503  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:22.076279  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.663835ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:22.138797  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:22.151353  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:22.176362  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.914376ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:22.259546  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:22.276307  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.727127ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:22.355436  108479 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 5.000421548s. Last Ready is: &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:0001-01-01 00:00:00 +0000 UTC,Reason:,Message:,}
I0919 10:33:22.355496  108479 node_lifecycle_controller.go:1012] Condition MemoryPressure of node node-0 was never updated by kubelet
I0919 10:33:22.355505  108479 node_lifecycle_controller.go:1012] Condition DiskPressure of node node-0 was never updated by kubelet
I0919 10:33:22.355519  108479 node_lifecycle_controller.go:1012] Condition PIDPressure of node node-0 was never updated by kubelet
I0919 10:33:22.358698  108479 httplog.go:90] PUT /api/v1/nodes/node-0/status: (2.726381ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:22.359105  108479 controller_utils.go:180] Recording status change NodeNotReady event message for node node-0
I0919 10:33:22.359196  108479 controller_utils.go:124] Update ready status of pods on node [node-0]
I0919 10:33:22.359221  108479 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-0", UID:"8a99cef4-734c-40fc-acac-2864928e012f", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeNotReady' Node node-0 status is now: NodeNotReady
I0919 10:33:22.359923  108479 httplog.go:90] GET /api/v1/nodes/node-0?resourceVersion=0: (520.11µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:22.360770  108479 httplog.go:90] GET /api/v1/pods?fieldSelector=spec.nodeName%3Dnode-0: (1.297679ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52988]
I0919 10:33:22.361094  108479 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 5.006021395s. Last Ready is: &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:0001-01-01 00:00:00 +0000 UTC,Reason:,Message:,}
I0919 10:33:22.361287  108479 node_lifecycle_controller.go:1012] Condition MemoryPressure of node node-1 was never updated by kubelet
I0919 10:33:22.361358  108479 node_lifecycle_controller.go:1012] Condition DiskPressure of node node-1 was never updated by kubelet
I0919 10:33:22.361408  108479 node_lifecycle_controller.go:1012] Condition PIDPressure of node node-1 was never updated by kubelet
I0919 10:33:22.361607  108479 httplog.go:90] POST /api/v1/namespaces/default/events: (1.847096ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53020]
I0919 10:33:22.363600  108479 httplog.go:90] PUT /api/v1/nodes/node-1/status: (1.762361ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52988]
I0919 10:33:22.363833  108479 controller_utils.go:180] Recording status change NodeNotReady event message for node node-1
I0919 10:33:22.363845  108479 httplog.go:90] PATCH /api/v1/nodes/node-0: (2.788069ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:22.363864  108479 controller_utils.go:124] Update ready status of pods on node [node-1]
I0919 10:33:22.364012  108479 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-1", UID:"7cdece98-9697-4f36-830c-8b57b1e91c85", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeNotReady' Node node-1 status is now: NodeNotReady
I0919 10:33:22.364145  108479 controller_utils.go:204] Added [&Taint{Key:node.kubernetes.io/unreachable,Value:,Effect:NoSchedule,TimeAdded:2019-09-19 10:33:22.359148077 +0000 UTC m=+322.194272882,}] Taint to Node node-0
I0919 10:33:22.364233  108479 controller_utils.go:216] Made sure that Node node-0 has no [] Taint
I0919 10:33:22.365421  108479 httplog.go:90] GET /api/v1/pods?fieldSelector=spec.nodeName%3Dnode-1: (1.369764ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52988]
I0919 10:33:22.365645  108479 httplog.go:90] GET /api/v1/nodes/node-1?resourceVersion=0: (407.981µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53020]
I0919 10:33:22.365711  108479 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 5.010569708s. Last Ready is: &NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:0001-01-01 00:00:00 +0000 UTC,Reason:,Message:,}
I0919 10:33:22.365746  108479 node_lifecycle_controller.go:1012] Condition MemoryPressure of node node-2 was never updated by kubelet
I0919 10:33:22.365765  108479 node_lifecycle_controller.go:1012] Condition DiskPressure of node node-2 was never updated by kubelet
I0919 10:33:22.365775  108479 node_lifecycle_controller.go:1012] Condition PIDPressure of node node-2 was never updated by kubelet
I0919 10:33:22.365917  108479 httplog.go:90] POST /api/v1/namespaces/default/events: (1.702425ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:22.367812  108479 httplog.go:90] PUT /api/v1/nodes/node-2/status: (1.819464ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53020]
I0919 10:33:22.368022  108479 node_lifecycle_controller.go:1094] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
I0919 10:33:22.368550  108479 httplog.go:90] PATCH /api/v1/nodes/node-1: (1.934181ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:22.368689  108479 httplog.go:90] GET /api/v1/nodes/node-2?resourceVersion=0: (359.997µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52988]
I0919 10:33:22.368959  108479 httplog.go:90] GET /api/v1/nodes/node-2?resourceVersion=0: (325.859µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53020]
I0919 10:33:22.369155  108479 controller_utils.go:204] Added [&Taint{Key:node.kubernetes.io/unreachable,Value:,Effect:NoSchedule,TimeAdded:2019-09-19 10:33:22.364702688 +0000 UTC m=+322.199827487,}] Taint to Node node-1
I0919 10:33:22.369245  108479 controller_utils.go:216] Made sure that Node node-1 has no [] Taint
I0919 10:33:22.371569  108479 store.go:362] GuaranteedUpdate of /2827d84e-6ac5-4d5a-a27a-d1a42961103f/minions/node-2 failed because of a conflict, going to retry
I0919 10:33:22.371881  108479 httplog.go:90] PATCH /api/v1/nodes/node-2: (2.146652ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52988]
I0919 10:33:22.372223  108479 controller_utils.go:204] Added [&Taint{Key:node.kubernetes.io/unreachable,Value:,Effect:NoSchedule,TimeAdded:2019-09-19 10:33:22.368091925 +0000 UTC m=+322.203216724,}] Taint to Node node-2
I0919 10:33:22.373034  108479 httplog.go:90] GET /api/v1/nodes/node-2?resourceVersion=0: (516.8µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52988]
I0919 10:33:22.373461  108479 httplog.go:90] PATCH /api/v1/nodes/node-2: (3.702938ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:22.373682  108479 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-2"}
I0919 10:33:22.373701  108479 taint_manager.go:438] Updating known taints on node node-2: []
I0919 10:33:22.373721  108479 taint_manager.go:459] All taints were removed from the Node node-2. Cancelling all evictions...
I0919 10:33:22.373734  108479 timed_workers.go:129] Cancelling TimedWorkerQueue item taint-based-evictionsf0849da6-26d9-4840-b19a-f553e6867734/testpod-2 at 2019-09-19 10:33:22.373731457 +0000 UTC m=+322.208856254
I0919 10:33:22.375632  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.156032ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52988]
I0919 10:33:22.376088  108479 httplog.go:90] PATCH /api/v1/nodes/node-2: (2.20427ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:22.376253  108479 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-2"}
I0919 10:33:22.376268  108479 taint_manager.go:438] Updating known taints on node node-2: [{node.kubernetes.io/not-ready  NoExecute 2019-09-19 10:33:17 +0000 UTC}]
I0919 10:33:22.376299  108479 timed_workers.go:110] Adding TimedWorkerQueue item taint-based-evictionsf0849da6-26d9-4840-b19a-f553e6867734/testpod-2 at 2019-09-19 10:33:22.376283204 +0000 UTC m=+322.211407999 to be fired at 2019-09-19 10:33:22.376283204 +0000 UTC m=+322.211407999
I0919 10:33:22.376331  108479 taint_manager.go:105] NoExecuteTaintManager is deleting Pod: taint-based-evictionsf0849da6-26d9-4840-b19a-f553e6867734/testpod-2
I0919 10:33:22.376391  108479 controller_utils.go:216] Made sure that Node node-2 has no [&Taint{Key:node.kubernetes.io/not-ready,Value:,Effect:NoSchedule,TimeAdded:2019-09-19 10:33:12 +0000 UTC,}] Taint
I0919 10:33:22.376499  108479 event.go:255] Event(v1.ObjectReference{Kind:"Pod", Namespace:"taint-based-evictionsf0849da6-26d9-4840-b19a-f553e6867734", Name:"testpod-2", UID:"", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TaintManagerEviction' Marking for deletion Pod taint-based-evictionsf0849da6-26d9-4840-b19a-f553e6867734/testpod-2
I0919 10:33:22.377728  108479 httplog.go:90] DELETE /api/v1/namespaces/taint-based-evictionsf0849da6-26d9-4840-b19a-f553e6867734/pods/testpod-2: (1.174491ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:22.378548  108479 httplog.go:90] PATCH /api/v1/namespaces/taint-based-evictionsf0849da6-26d9-4840-b19a-f553e6867734/events/testpod-2.15c5d069a4feae5b: (1.782802ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52988]
I0919 10:33:22.476133  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.575502ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52988]
I0919 10:33:22.532061  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:22.533222  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:22.533404  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:22.533426  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:22.533367  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:22.533390  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:22.533536  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:22.576225  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.702936ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52988]
I0919 10:33:22.676247  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.655081ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52988]
I0919 10:33:22.776455  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.892101ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52988]
I0919 10:33:22.876355  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.751792ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52988]
I0919 10:33:22.934564  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:22.934668  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:22.936283  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:22.936295  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:22.936283  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:22.936708  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:22.949260  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:22.949256  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:22.949608  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:22.949872  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:22.950020  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:22.950209  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:22.976375  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.866568ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52988]
I0919 10:33:23.055209  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:23.055956  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:23.056131  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:23.056299  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:23.058362  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:23.059678  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:23.076136  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.603361ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52988]
I0919 10:33:23.139033  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:23.151713  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:23.176495  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.984879ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52988]
I0919 10:33:23.259757  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:23.276613  108479 httplog.go:90] GET /api/v1/nodes/node-2: (2.013849ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52988]
I0919 10:33:23.376260  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.692609ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52988]
I0919 10:33:23.476405  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.752204ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52988]
I0919 10:33:23.532527  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:23.533564  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:23.533588  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:23.533619  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:23.533747  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:23.533830  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:23.533831  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:23.577033  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.856052ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52988]
I0919 10:33:23.676305  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.8075ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52988]
I0919 10:33:23.776435  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.903397ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52988]
I0919 10:33:23.876432  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.908756ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52988]
I0919 10:33:23.934823  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:23.934832  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:23.936460  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:23.936460  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:23.936460  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:23.936925  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:23.949452  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:23.949466  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:23.949762  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:23.950003  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:23.950200  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:23.950363  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:23.976106  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.608212ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52988]
I0919 10:33:24.055375  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:24.056224  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:24.056371  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:24.056600  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:24.058506  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:24.059820  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:24.076121  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.554948ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52988]
I0919 10:33:24.139211  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:24.151907  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:24.176499  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.913057ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52988]
I0919 10:33:24.259975  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:24.276390  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.717729ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52988]
I0919 10:33:24.376112  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.571003ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52988]
I0919 10:33:24.476246  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.679599ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52988]
I0919 10:33:24.532756  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:24.533702  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:24.533726  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:24.533785  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:24.533925  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:24.533938  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:24.533987  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:24.577006  108479 httplog.go:90] GET /api/v1/nodes/node-2: (2.433314ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52988]
I0919 10:33:24.676569  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.999692ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52988]
I0919 10:33:24.776044  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.528533ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52988]
I0919 10:33:24.877057  108479 httplog.go:90] GET /api/v1/nodes/node-2: (2.36882ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52988]
I0919 10:33:24.935099  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:24.935100  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:24.936631  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:24.936659  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:24.936661  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:24.937108  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:24.949727  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:24.949725  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:24.950134  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:24.950163  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:24.950401  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:24.950557  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:24.976327  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.8034ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52988]
I0919 10:33:25.055525  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:25.056438  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:25.056537  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:25.056755  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:25.058694  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:25.060005  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:25.076045  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.562597ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52988]
I0919 10:33:25.139403  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:25.152085  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:25.176404  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.833799ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52988]
I0919 10:33:25.260186  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:25.276829  108479 httplog.go:90] GET /api/v1/nodes/node-2: (2.267092ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52988]
I0919 10:33:25.376243  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.631699ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52988]
I0919 10:33:25.418619  108479 httplog.go:90] GET /api/v1/namespaces/default: (1.272711ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35674]
I0919 10:33:25.420299  108479 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.194709ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35674]
I0919 10:33:25.422107  108479 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.38955ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35674]
I0919 10:33:25.476216  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.670601ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52988]
I0919 10:33:25.532942  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:25.533885  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:25.533905  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:25.533889  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:25.534081  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:25.534108  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:25.534222  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:25.576466  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.915029ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52988]
I0919 10:33:25.676289  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.775055ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52988]
I0919 10:33:25.776333  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.786069ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52988]
I0919 10:33:25.876470  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.891712ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52988]
I0919 10:33:25.935312  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:25.935314  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:25.936786  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:25.936794  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:25.936833  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:25.937236  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:25.950024  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:25.950031  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:25.950417  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:25.950425  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:25.950520  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:25.950739  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:25.976510  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.944247ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52988]
I0919 10:33:26.055924  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:26.056627  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:26.056697  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:26.056997  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:26.058853  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:26.060244  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:26.076503  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.969092ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52988]
I0919 10:33:26.139609  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:26.152342  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:26.176445  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.8969ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52988]
I0919 10:33:26.260518  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:26.276403  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.779455ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52988]
I0919 10:33:26.376220  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.677831ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52988]
I0919 10:33:26.476276  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.76574ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52988]
I0919 10:33:26.533251  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:26.534053  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:26.534054  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:26.534063  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:26.534234  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:26.534378  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:26.534295  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:26.576215  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.661048ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52988]
I0919 10:33:26.676391  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.800678ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52988]
I0919 10:33:26.729048  108479 httplog.go:90] GET /api/v1/namespaces/default: (1.622007ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36182]
I0919 10:33:26.731064  108479 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.374843ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36182]
I0919 10:33:26.732642  108479 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.189864ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36182]
I0919 10:33:26.776091  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.587847ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52988]
I0919 10:33:26.876492  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.886654ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52988]
I0919 10:33:26.935595  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:26.935595  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:26.936963  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:26.936976  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:26.936999  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:26.937378  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:26.950246  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:26.950247  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:26.950604  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:26.950640  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:26.950738  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:26.950929  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:26.976136  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.633323ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52988]
I0919 10:33:27.056099  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:27.056817  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:27.056900  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:27.057130  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:27.058992  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:27.060476  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:27.075967  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.429981ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52988]
I0919 10:33:27.139821  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:27.152543  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:27.176418  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.848259ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52988]
I0919 10:33:27.260745  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:27.276476  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.797064ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52988]
I0919 10:33:27.374072  108479 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 10.0189947s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-19 10:33:22 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0919 10:33:27.374134  108479 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 10.01906455s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 10:33:12 +0000 UTC,LastTransitionTime:2019-09-19 10:33:22 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 10:33:27.374155  108479 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 10.019087554s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 10:33:12 +0000 UTC,LastTransitionTime:2019-09-19 10:33:22 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 10:33:27.374187  108479 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 10.01911956s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 10:33:12 +0000 UTC,LastTransitionTime:2019-09-19 10:33:22 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 10:33:27.374259  108479 node_lifecycle_controller.go:796] Node node-1 is unresponsive as of 2019-09-19 10:33:27.37424101 +0000 UTC m=+327.209365812. Adding it to the Taint queue.
I0919 10:33:27.374291  108479 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 10.019155907s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-19 10:33:22 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0919 10:33:27.374303  108479 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 10.019168079s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 10:33:12 +0000 UTC,LastTransitionTime:2019-09-19 10:33:22 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 10:33:27.374317  108479 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 10.019182454s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 10:33:12 +0000 UTC,LastTransitionTime:2019-09-19 10:33:22 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 10:33:27.374332  108479 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 10.019197667s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 10:33:12 +0000 UTC,LastTransitionTime:2019-09-19 10:33:22 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 10:33:27.375328  108479 httplog.go:90] GET /api/v1/nodes/node-2?resourceVersion=0: (756.07µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:27.376754  108479 httplog.go:90] GET /api/v1/nodes/node-2: (2.18022ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52988]
I0919 10:33:27.379788  108479 httplog.go:90] PATCH /api/v1/nodes/node-2: (2.817424ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:27.380413  108479 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-2"}
I0919 10:33:27.380455  108479 taint_manager.go:438] Updating known taints on node node-2: [{node.kubernetes.io/not-ready  NoExecute 2019-09-19 10:33:17 +0000 UTC} {node.kubernetes.io/unreachable  NoExecute 2019-09-19 10:33:27 +0000 UTC}]
I0919 10:33:27.380495  108479 timed_workers.go:110] Adding TimedWorkerQueue item taint-based-evictionsf0849da6-26d9-4840-b19a-f553e6867734/testpod-2 at 2019-09-19 10:33:27.380482751 +0000 UTC m=+327.215607551 to be fired at 2019-09-19 10:33:27.380482751 +0000 UTC m=+327.215607551
W0919 10:33:27.380508  108479 timed_workers.go:115] Trying to add already existing work for &{NamespacedName:taint-based-evictionsf0849da6-26d9-4840-b19a-f553e6867734/testpod-2}. Skipping.
I0919 10:33:27.380615  108479 controller_utils.go:204] Added [&Taint{Key:node.kubernetes.io/unreachable,Value:,Effect:NoExecute,TimeAdded:2019-09-19 10:33:27.374354002 +0000 UTC m=+327.209479102,}] Taint to Node node-2
I0919 10:33:27.381352  108479 httplog.go:90] GET /api/v1/nodes/node-2?resourceVersion=0: (528.033µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:27.384469  108479 httplog.go:90] PATCH /api/v1/nodes/node-2: (2.22759ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:27.384875  108479 controller_utils.go:216] Made sure that Node node-2 has no [&Taint{Key:node.kubernetes.io/not-ready,Value:,Effect:NoExecute,TimeAdded:<nil>,}] Taint
I0919 10:33:27.384904  108479 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-2"}
I0919 10:33:27.384929  108479 taint_manager.go:438] Updating known taints on node node-2: [{node.kubernetes.io/unreachable  NoExecute 2019-09-19 10:33:27 +0000 UTC}]
I0919 10:33:27.384959  108479 timed_workers.go:110] Adding TimedWorkerQueue item taint-based-evictionsf0849da6-26d9-4840-b19a-f553e6867734/testpod-2 at 2019-09-19 10:33:27.384945802 +0000 UTC m=+327.220070604 to be fired at 2019-09-19 10:38:27.384945802 +0000 UTC m=+627.220070604
W0919 10:33:27.384972  108479 timed_workers.go:115] Trying to add already existing work for &{NamespacedName:taint-based-evictionsf0849da6-26d9-4840-b19a-f553e6867734/testpod-2}. Skipping.
I0919 10:33:27.384978  108479 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 10.029970576s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-19 10:33:22 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0919 10:33:27.385044  108479 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 10.030037914s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 10:33:12 +0000 UTC,LastTransitionTime:2019-09-19 10:33:22 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 10:33:27.385102  108479 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 10.030096455s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 10:33:12 +0000 UTC,LastTransitionTime:2019-09-19 10:33:22 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 10:33:27.385154  108479 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 10.030148112s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 10:33:12 +0000 UTC,LastTransitionTime:2019-09-19 10:33:22 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 10:33:27.385315  108479 node_lifecycle_controller.go:796] Node node-0 is unresponsive as of 2019-09-19 10:33:27.385296831 +0000 UTC m=+327.220421636. Adding it to the Taint queue.
I0919 10:33:27.476320  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.796911ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:27.533475  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:27.534207  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:27.534221  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:27.534319  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:27.534570  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:27.534586  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:27.534574  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:27.576481  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.844131ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:27.676413  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.801035ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:27.776684  108479 httplog.go:90] GET /api/v1/nodes/node-2: (2.066924ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:27.876340  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.737969ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:27.935793  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:27.935850  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:27.937150  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:27.937159  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:27.937166  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:27.937557  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:27.950471  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:27.950471  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:27.950810  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:27.950853  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:27.950854  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:27.951102  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:27.976591  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.867151ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:28.056332  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:28.056987  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:28.057085  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:28.057444  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:28.059151  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:28.060719  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:28.076601  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.721901ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:28.140033  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:28.152740  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:28.176284  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.755955ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:28.261007  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:28.276237  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.681421ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:28.376470  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.853849ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:28.477957  108479 httplog.go:90] GET /api/v1/nodes/node-2: (3.360029ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:28.533678  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:28.534449  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:28.534508  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:28.534555  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:28.534734  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:28.534760  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:28.534809  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:28.576402  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.84041ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:28.675985  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.447921ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:28.776220  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.591336ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:28.876277  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.749762ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:28.936005  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:28.936064  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:28.937396  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:28.937400  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:28.937418  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:28.937791  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:28.950638  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:28.950637  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:28.950959  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:28.950963  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:28.951031  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:28.951198  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:28.976440  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.834802ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:29.056501  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:29.057154  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:29.057264  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:29.057600  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:29.059331  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:29.060942  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:29.076271  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.701912ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:29.140280  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:29.152936  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:29.176390  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.812011ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:29.261289  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:29.276226  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.672905ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:29.376078  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.605514ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:29.476410  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.831816ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:29.533898  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:29.534601  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:29.534663  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:29.534687  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:29.534922  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:29.534955  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:29.534960  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:29.576346  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.685691ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:29.676572  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.900906ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:29.776689  108479 httplog.go:90] GET /api/v1/nodes/node-2: (2.09343ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:29.876821  108479 httplog.go:90] GET /api/v1/nodes/node-2: (2.23628ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:29.936461  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:29.936461  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:29.937538  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:29.937552  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:29.937552  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:29.937974  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:29.950737  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:29.950786  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:29.951088  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:29.951147  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:29.951213  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:29.951403  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:29.975974  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.515381ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:30.056667  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:30.057367  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:30.057460  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:30.057839  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:30.059565  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:30.061220  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:30.076291  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.776231ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:30.140459  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:30.153221  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:30.176077  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.612893ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:30.261794  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:30.276715  108479 httplog.go:90] GET /api/v1/nodes/node-2: (2.1959ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:30.377207  108479 httplog.go:90] GET /api/v1/nodes/node-2: (2.581617ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:30.476413  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.889628ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:30.534079  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:30.534758  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:30.534833  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:30.534936  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:30.535071  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:30.535087  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:30.535073  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:30.576739  108479 httplog.go:90] GET /api/v1/nodes/node-2: (2.057977ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:30.676326  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.782655ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:30.776454  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.827011ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:30.876751  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.863507ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:30.936662  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:30.936730  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:30.937762  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:30.937764  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:30.937859  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:30.938131  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:30.950965  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:30.951489  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:30.951540  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:30.951567  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:30.951596  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:30.951677  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:30.976361  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.761415ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:31.056858  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:31.057539  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:31.057636  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:31.058016  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:31.059754  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:31.061390  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:31.076637  108479 httplog.go:90] GET /api/v1/nodes/node-2: (2.13864ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:31.140650  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:31.153416  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:31.176279  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.690278ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:31.261994  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:31.276636  108479 httplog.go:90] GET /api/v1/nodes/node-2: (2.143098ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:31.376000  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.547917ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:31.476123  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.615132ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:31.534272  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:31.535020  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:31.535022  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:31.535196  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:31.535213  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:31.535411  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:31.535748  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:31.576270  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.740516ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:31.676276  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.718919ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:31.776298  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.747962ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:31.845918  108479 httplog.go:90] GET /api/v1/namespaces/default: (1.236091ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55422]
I0919 10:33:31.847439  108479 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.085515ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55422]
I0919 10:33:31.848924  108479 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.018489ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55422]
I0919 10:33:31.863428  108479 httplog.go:90] GET /api/v1/namespaces/default: (1.371869ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:31.865036  108479 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.163776ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:31.866720  108479 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.231592ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:31.875649  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.168549ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:31.936854  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:31.936850  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:31.937919  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:31.937928  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:31.937943  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:31.938294  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:31.951145  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:31.951866  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:31.951871  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:31.952045  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:31.952080  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:31.952097  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:31.976358  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.794206ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:32.057029  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:32.057721  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:32.057818  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:32.058097  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:32.060060  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:32.061588  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:32.076246  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.621823ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:32.140847  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:32.153578  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:32.176364  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.785039ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:32.262226  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:32.276410  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.877045ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:32.376523  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.896321ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:32.385595  108479 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 15.030581638s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-19 10:33:22 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0919 10:33:32.385860  108479 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 15.030831347s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 10:33:12 +0000 UTC,LastTransitionTime:2019-09-19 10:33:22 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 10:33:32.385965  108479 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 15.030960222s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 10:33:12 +0000 UTC,LastTransitionTime:2019-09-19 10:33:22 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 10:33:32.386076  108479 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 15.03107149s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 10:33:12 +0000 UTC,LastTransitionTime:2019-09-19 10:33:22 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 10:33:32.386281  108479 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 15.031212815s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-19 10:33:22 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0919 10:33:32.386390  108479 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 15.031320591s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 10:33:12 +0000 UTC,LastTransitionTime:2019-09-19 10:33:22 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 10:33:32.386490  108479 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 15.031421643s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 10:33:12 +0000 UTC,LastTransitionTime:2019-09-19 10:33:22 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 10:33:32.386580  108479 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 15.031511577s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 10:33:12 +0000 UTC,LastTransitionTime:2019-09-19 10:33:22 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 10:33:32.386722  108479 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 15.031585679s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-19 10:33:22 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0919 10:33:32.386797  108479 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 15.031661419s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 10:33:12 +0000 UTC,LastTransitionTime:2019-09-19 10:33:22 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 10:33:32.386881  108479 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 15.031744794s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 10:33:12 +0000 UTC,LastTransitionTime:2019-09-19 10:33:22 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 10:33:32.386950  108479 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 15.031813871s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-09-19 10:33:12 +0000 UTC,LastTransitionTime:2019-09-19 10:33:22 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0919 10:33:32.387056  108479 node_lifecycle_controller.go:796] Node node-2 is unresponsive as of 2019-09-19 10:33:32.387036331 +0000 UTC m=+332.222161137. Adding it to the Taint queue.
I0919 10:33:32.476204  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.620129ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:32.534571  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:32.535238  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:32.535248  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:32.535387  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:32.535409  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:32.535552  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:32.535959  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:32.576246  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.69718ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:32.676422  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.80896ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:32.776933  108479 httplog.go:90] GET /api/v1/nodes/node-2: (2.345695ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:32.876491  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.954119ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:32.937068  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:32.937068  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:32.938081  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:32.938279  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:32.938347  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:32.938486  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:32.951434  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:32.952036  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:32.952037  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:32.952295  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:32.952328  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:32.952388  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:32.976874  108479 httplog.go:90] GET /api/v1/nodes/node-2: (2.323346ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:33.057203  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:33.057970  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:33.058022  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:33.058340  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:33.060272  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:33.061814  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:33.076464  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.768669ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:33.141053  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:33.153707  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:33.176456  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.830915ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:33.262627  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:33.275997  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.407254ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:33.376245  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.735516ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:33.435607  108479 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.517836ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36182]
I0919 10:33:33.437517  108479 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.344057ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36182]
I0919 10:33:33.438967  108479 httplog.go:90] GET /api/v1/namespaces/kube-node-lease: (1.030298ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36182]
I0919 10:33:33.476260  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.717131ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:33.534783  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:33.535408  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:33.535414  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:33.535589  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:33.535604  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:33.535733  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:33.536122  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:33.576409  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.882492ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:33.676351  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.773376ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:33.776628  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.926343ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:33.876509  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.919926ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:33.937488  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:33.937587  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:33.938217  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:33.938409  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:33.938577  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:33.938690  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:33.951631  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:33.952235  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:33.952238  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:33.952501  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:33.952525  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:33.952502  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:33.976891  108479 httplog.go:90] GET /api/v1/nodes/node-2: (2.325954ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:34.057384  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:34.058232  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:34.058244  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:34.058464  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:34.060408  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:34.062351  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:34.076087  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.560094ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:34.141385  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:34.153898  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:34.176484  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.791692ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:34.262843  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:34.276230  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.629774ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:34.376511  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.903528ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:34.476226  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.644974ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:34.534954  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:34.535535  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:34.535571  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:34.535833  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:34.535857  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:34.535868  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:34.536345  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:34.576399  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.848367ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:34.676094  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.501683ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:34.776122  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.567538ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:34.876384  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.812618ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:34.937720  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:34.937725  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:34.938397  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:34.938567  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:34.938731  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:34.938865  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:34.951814  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:34.952506  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:34.952511  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:34.952672  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:34.952693  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:34.952672  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:34.976061  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.563901ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:35.057567  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:35.058499  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:35.058545  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:35.058634  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:35.060559  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:35.062520  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:35.076530  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.859589ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:35.141782  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:35.154075  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:35.176229  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.705225ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:35.263082  108479 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0919 10:33:35.276846  108479 httplog.go:90] GET /api/v1/nodes/node-2: (2.210593ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:35.376166  108479 httplog.go:90] GET /api/v1/nodes/node-2: (1.547437ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52934]
I0919 10:33:35.419057  108479 httplog.go:90] GET /api/v1/namespaces/default: (1.541737ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35674]
I0919 10:33:35.420856  108479 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.233778ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35674]
I0919 10:33:35.422432  108479 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.171663ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35674]
I0919 10:33:35.476661  108479 httplog.