PR: draveness: feat: update taint nodes by condition to GA
Result: FAILURE
Tests: 8 failed / 2857 succeeded
Started: 2019-09-20 04:01
Elapsed: 28m14s
Revision:
Builder: gke-prow-ssd-pool-1a225945-p7vq
Refs: master:db1f8da0, 82703:b4cf6428
pod: 43cca569-db5b-11e9-8563-d28bdca8a776
infra-commit: 5a67b1fcf
repo: k8s.io/kubernetes
repo-commit: efb8d077b070be781b9eb13c9d833fcba0b9a5da
repos: {u'k8s.io/kubernetes': u'master:db1f8da036428636a710a9081a5fc18ba30c6ef0,82703:b4cf642803a460ff73fae1bd3d6d1287e16beef6'}

Test Failures


k8s.io/kubernetes/test/integration/scheduler TestTaintBasedEvictions 2m20s

go test -v k8s.io/kubernetes/test/integration/scheduler -run TestTaintBasedEvictions$
=== RUN   TestTaintBasedEvictions
I0920 04:26:58.605737  108327 feature_gate.go:216] feature gates: &{map[EvenPodsSpread:false TaintBasedEvictions:true]}
--- FAIL: TestTaintBasedEvictions (140.27s)

				from junit_d965d8661547eb73cabe6d94d5550ec333e4c0fa_20190920-041605.xml
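The feature_gate line above pins the gate combination under test: TaintBasedEvictions on, EvenPodsSpread off. As a minimal sketch of how a Kubernetes integration test typically forces gates like this, assuming the stock component-base test helper (the actual setup in this PR's test may differ):

package scheduler

import (
	"testing"

	utilfeature "k8s.io/apiserver/pkg/util/feature"
	featuregatetesting "k8s.io/component-base/featuregate/testing"
	"k8s.io/kubernetes/pkg/features"
)

func TestTaintBasedEvictionsSketch(t *testing.T) {
	// Pin both gates for the duration of the test; each helper call
	// returns a cleanup func that restores the previous value.
	defer featuregatetesting.SetFeatureGateDuringTest(t, utilfeature.DefaultFeatureGate, features.TaintBasedEvictions, true)()
	defer featuregatetesting.SetFeatureGateDuringTest(t, utilfeature.DefaultFeatureGate, features.EvenPodsSpread, false)()

	// ... start apiserver + scheduler, taint a node, assert evictions ...
}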



k8s.io/kubernetes/test/integration/scheduler TestTaintBasedEvictions/Taint_based_evictions_for_NodeNotReady_and_0_tolerationseconds 35s

go test -v k8s.io/kubernetes/test/integration/scheduler -run TestTaintBasedEvictions/Taint_based_evictions_for_NodeNotReady_and_0_tolerationseconds$
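Per its name, this subtest exercises a pod that tolerates the node.kubernetes.io/not-ready NoExecute taint with tolerationSeconds set to 0, so eviction fires as soon as the node turns NotReady. A minimal sketch of such a toleration (illustrative; the variable names are assumptions, and only the taint key, effect, and zero seconds follow from the subtest name):

package scheduler

import (
	v1 "k8s.io/api/core/v1"
)

// Tolerate the not-ready NoExecute taint for 0 seconds: the pod is not
// deleted while the node is Ready, but is evicted immediately once the
// NotReady taint lands on its node.
var zeroSeconds int64 = 0

var notReadyZeroToleration = v1.Toleration{
	Key:               "node.kubernetes.io/not-ready",
	Operator:          v1.TolerationOpExists,
	Effect:            v1.TaintEffectNoExecute,
	TolerationSeconds: &zeroSeconds,
}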
=== RUN   TestTaintBasedEvictions/Taint_based_evictions_for_NodeNotReady_and_0_tolerationseconds
W0920 04:28:08.804952  108327 services.go:35] No CIDR for service cluster IPs specified. Default value which was 10.0.0.0/24 is deprecated and will be removed in future releases. Please specify it using --service-cluster-ip-range on kube-apiserver.
I0920 04:28:08.805104  108327 services.go:47] Setting service IP to "10.0.0.1" (read-write).
I0920 04:28:08.805224  108327 master.go:303] Node port range unspecified. Defaulting to 30000-32767.
I0920 04:28:08.805309  108327 master.go:259] Using reconciler: 
I0920 04:28:08.806865  108327 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.807173  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:28:08.807347  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:28:08.808212  108327 store.go:1342] Monitoring podtemplates count at <storage-prefix>//podtemplates
I0920 04:28:08.808360  108327 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.808260  108327 reflector.go:153] Listing and watching *core.PodTemplate from storage/cacher.go:/podtemplates
I0920 04:28:08.808800  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:28:08.808842  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:28:08.809710  108327 store.go:1342] Monitoring events count at <storage-prefix>//events
I0920 04:28:08.809808  108327 reflector.go:153] Listing and watching *core.Event from storage/cacher.go:/events
I0920 04:28:08.809865  108327 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.809988  108327 watch_cache.go:405] Replace watchCache (rev: 59612) 
I0920 04:28:08.810186  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:28:08.810209  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:28:08.810801  108327 watch_cache.go:405] Replace watchCache (rev: 59612) 
I0920 04:28:08.811561  108327 store.go:1342] Monitoring limitranges count at <storage-prefix>//limitranges
I0920 04:28:08.811636  108327 reflector.go:153] Listing and watching *core.LimitRange from storage/cacher.go:/limitranges
I0920 04:28:08.811840  108327 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.812046  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:28:08.812119  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:28:08.812618  108327 watch_cache.go:405] Replace watchCache (rev: 59612) 
I0920 04:28:08.812997  108327 store.go:1342] Monitoring resourcequotas count at <storage-prefix>//resourcequotas
I0920 04:28:08.813042  108327 reflector.go:153] Listing and watching *core.ResourceQuota from storage/cacher.go:/resourcequotas
I0920 04:28:08.813496  108327 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.813673  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:28:08.813738  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:28:08.814141  108327 watch_cache.go:405] Replace watchCache (rev: 59612) 
I0920 04:28:08.814511  108327 store.go:1342] Monitoring secrets count at <storage-prefix>//secrets
I0920 04:28:08.814698  108327 reflector.go:153] Listing and watching *core.Secret from storage/cacher.go:/secrets
I0920 04:28:08.814690  108327 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.814891  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:28:08.814963  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:28:08.815857  108327 store.go:1342] Monitoring persistentvolumes count at <storage-prefix>//persistentvolumes
I0920 04:28:08.816069  108327 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.816120  108327 reflector.go:153] Listing and watching *core.PersistentVolume from storage/cacher.go:/persistentvolumes
I0920 04:28:08.816203  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:28:08.816228  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:28:08.816597  108327 watch_cache.go:405] Replace watchCache (rev: 59612) 
I0920 04:28:08.817182  108327 store.go:1342] Monitoring persistentvolumeclaims count at <storage-prefix>//persistentvolumeclaims
I0920 04:28:08.817284  108327 watch_cache.go:405] Replace watchCache (rev: 59612) 
I0920 04:28:08.817445  108327 reflector.go:153] Listing and watching *core.PersistentVolumeClaim from storage/cacher.go:/persistentvolumeclaims
I0920 04:28:08.817670  108327 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.817852  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:28:08.817873  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:28:08.818622  108327 watch_cache.go:405] Replace watchCache (rev: 59612) 
I0920 04:28:08.818824  108327 store.go:1342] Monitoring configmaps count at <storage-prefix>//configmaps
I0920 04:28:08.818863  108327 reflector.go:153] Listing and watching *core.ConfigMap from storage/cacher.go:/configmaps
I0920 04:28:08.819036  108327 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.819306  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:28:08.819334  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:28:08.819951  108327 watch_cache.go:405] Replace watchCache (rev: 59612) 
I0920 04:28:08.820280  108327 store.go:1342] Monitoring namespaces count at <storage-prefix>//namespaces
I0920 04:28:08.820359  108327 reflector.go:153] Listing and watching *core.Namespace from storage/cacher.go:/namespaces
I0920 04:28:08.820880  108327 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.821299  108327 watch_cache.go:405] Replace watchCache (rev: 59612) 
I0920 04:28:08.821894  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:28:08.821932  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:28:08.822793  108327 store.go:1342] Monitoring endpoints count at <storage-prefix>//services/endpoints
I0920 04:28:08.822861  108327 reflector.go:153] Listing and watching *core.Endpoints from storage/cacher.go:/services/endpoints
I0920 04:28:08.823063  108327 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.823340  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:28:08.823377  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:28:08.823758  108327 watch_cache.go:405] Replace watchCache (rev: 59612) 
I0920 04:28:08.824585  108327 store.go:1342] Monitoring nodes count at <storage-prefix>//minions
I0920 04:28:08.824737  108327 reflector.go:153] Listing and watching *core.Node from storage/cacher.go:/minions
I0920 04:28:08.824742  108327 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.825001  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:28:08.825021  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:28:08.825675  108327 watch_cache.go:405] Replace watchCache (rev: 59612) 
I0920 04:28:08.825718  108327 store.go:1342] Monitoring pods count at <storage-prefix>//pods
I0920 04:28:08.825865  108327 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.825971  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:28:08.825993  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:28:08.826053  108327 reflector.go:153] Listing and watching *core.Pod from storage/cacher.go:/pods
I0920 04:28:08.826682  108327 watch_cache.go:405] Replace watchCache (rev: 59612) 
I0920 04:28:08.826921  108327 store.go:1342] Monitoring serviceaccounts count at <storage-prefix>//serviceaccounts
I0920 04:28:08.826978  108327 reflector.go:153] Listing and watching *core.ServiceAccount from storage/cacher.go:/serviceaccounts
I0920 04:28:08.827166  108327 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.827343  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:28:08.827363  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:28:08.827808  108327 watch_cache.go:405] Replace watchCache (rev: 59612) 
I0920 04:28:08.828096  108327 store.go:1342] Monitoring services count at <storage-prefix>//services/specs
I0920 04:28:08.828134  108327 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.828191  108327 reflector.go:153] Listing and watching *core.Service from storage/cacher.go:/services/specs
I0920 04:28:08.828341  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:28:08.828361  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:28:08.829152  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:28:08.829176  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:28:08.829965  108327 watch_cache.go:405] Replace watchCache (rev: 59612) 
I0920 04:28:08.830262  108327 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.830483  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:28:08.830515  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:28:08.831124  108327 store.go:1342] Monitoring replicationcontrollers count at <storage-prefix>//controllers
I0920 04:28:08.831160  108327 rest.go:115] the default service ipfamily for this cluster is: IPv4
I0920 04:28:08.831219  108327 reflector.go:153] Listing and watching *core.ReplicationController from storage/cacher.go:/controllers
I0920 04:28:08.831729  108327 storage_factory.go:285] storing bindings in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.831898  108327 storage_factory.go:285] storing componentstatuses in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.832828  108327 watch_cache.go:405] Replace watchCache (rev: 59612) 
I0920 04:28:08.833012  108327 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.833544  108327 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.834152  108327 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.834944  108327 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.835331  108327 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.835483  108327 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.835652  108327 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.836004  108327 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.836473  108327 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.836678  108327 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.837269  108327 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.837495  108327 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.838038  108327 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.838226  108327 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.838810  108327 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.838961  108327 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.839051  108327 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.839166  108327 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.839286  108327 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.839443  108327 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.839614  108327 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.840180  108327 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.840363  108327 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.841133  108327 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.841880  108327 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.842082  108327 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.842271  108327 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.842865  108327 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.843079  108327 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.843739  108327 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.844218  108327 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.844780  108327 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.845424  108327 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.845789  108327 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.845884  108327 master.go:450] Skipping disabled API group "auditregistration.k8s.io".
I0920 04:28:08.845900  108327 master.go:461] Enabling API group "authentication.k8s.io".
I0920 04:28:08.845912  108327 master.go:461] Enabling API group "authorization.k8s.io".
I0920 04:28:08.846073  108327 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.846296  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:28:08.846329  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:28:08.847104  108327 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0920 04:28:08.847161  108327 reflector.go:153] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0920 04:28:08.847344  108327 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.847504  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:28:08.847531  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:28:08.848266  108327 watch_cache.go:405] Replace watchCache (rev: 59612) 
I0920 04:28:08.848452  108327 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0920 04:28:08.848514  108327 reflector.go:153] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0920 04:28:08.848631  108327 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.848760  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:28:08.848781  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:28:08.849331  108327 watch_cache.go:405] Replace watchCache (rev: 59612) 
I0920 04:28:08.850024  108327 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0920 04:28:08.850073  108327 reflector.go:153] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0920 04:28:08.850138  108327 master.go:461] Enabling API group "autoscaling".
I0920 04:28:08.850783  108327 watch_cache.go:405] Replace watchCache (rev: 59612) 
I0920 04:28:08.850291  108327 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.851039  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:28:08.851058  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:28:08.851834  108327 store.go:1342] Monitoring jobs.batch count at <storage-prefix>//jobs
I0920 04:28:08.851914  108327 reflector.go:153] Listing and watching *batch.Job from storage/cacher.go:/jobs
I0920 04:28:08.852098  108327 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.852618  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:28:08.852655  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:28:08.852734  108327 watch_cache.go:405] Replace watchCache (rev: 59612) 
I0920 04:28:08.853538  108327 store.go:1342] Monitoring cronjobs.batch count at <storage-prefix>//cronjobs
I0920 04:28:08.853565  108327 master.go:461] Enabling API group "batch".
I0920 04:28:08.853622  108327 reflector.go:153] Listing and watching *batch.CronJob from storage/cacher.go:/cronjobs
I0920 04:28:08.853872  108327 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.854052  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:28:08.854078  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:28:08.854781  108327 watch_cache.go:405] Replace watchCache (rev: 59612) 
I0920 04:28:08.854935  108327 store.go:1342] Monitoring certificatesigningrequests.certificates.k8s.io count at <storage-prefix>//certificatesigningrequests
I0920 04:28:08.854962  108327 reflector.go:153] Listing and watching *certificates.CertificateSigningRequest from storage/cacher.go:/certificatesigningrequests
I0920 04:28:08.854987  108327 master.go:461] Enabling API group "certificates.k8s.io".
I0920 04:28:08.855230  108327 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.855358  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:28:08.855383  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:28:08.855807  108327 watch_cache.go:405] Replace watchCache (rev: 59612) 
I0920 04:28:08.856099  108327 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0920 04:28:08.856165  108327 reflector.go:153] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0920 04:28:08.856304  108327 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.856465  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:28:08.856500  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:28:08.856878  108327 watch_cache.go:405] Replace watchCache (rev: 59612) 
I0920 04:28:08.857008  108327 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0920 04:28:08.857028  108327 master.go:461] Enabling API group "coordination.k8s.io".
I0920 04:28:08.857043  108327 master.go:450] Skipping disabled API group "discovery.k8s.io".
I0920 04:28:08.857068  108327 reflector.go:153] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0920 04:28:08.857464  108327 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.857585  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:28:08.857598  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:28:08.857645  108327 watch_cache.go:405] Replace watchCache (rev: 59612) 
I0920 04:28:08.858449  108327 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0920 04:28:08.858491  108327 reflector.go:153] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0920 04:28:08.858696  108327 master.go:461] Enabling API group "extensions".
I0920 04:28:08.858967  108327 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.859080  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:28:08.859106  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:28:08.859429  108327 watch_cache.go:405] Replace watchCache (rev: 59612) 
I0920 04:28:08.859687  108327 store.go:1342] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I0920 04:28:08.859835  108327 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.859892  108327 reflector.go:153] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I0920 04:28:08.859939  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:28:08.859955  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:28:08.860531  108327 watch_cache.go:405] Replace watchCache (rev: 59612) 
I0920 04:28:08.860660  108327 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0920 04:28:08.860693  108327 master.go:461] Enabling API group "networking.k8s.io".
I0920 04:28:08.860725  108327 reflector.go:153] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0920 04:28:08.860732  108327 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.860868  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:28:08.860886  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:28:08.861485  108327 watch_cache.go:405] Replace watchCache (rev: 59612) 
I0920 04:28:08.861897  108327 store.go:1342] Monitoring runtimeclasses.node.k8s.io count at <storage-prefix>//runtimeclasses
I0920 04:28:08.861917  108327 master.go:461] Enabling API group "node.k8s.io".
I0920 04:28:08.861976  108327 reflector.go:153] Listing and watching *node.RuntimeClass from storage/cacher.go:/runtimeclasses
I0920 04:28:08.862082  108327 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.862202  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:28:08.862278  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:28:08.862727  108327 watch_cache.go:405] Replace watchCache (rev: 59612) 
I0920 04:28:08.863136  108327 store.go:1342] Monitoring poddisruptionbudgets.policy count at <storage-prefix>//poddisruptionbudgets
I0920 04:28:08.863218  108327 reflector.go:153] Listing and watching *policy.PodDisruptionBudget from storage/cacher.go:/poddisruptionbudgets
I0920 04:28:08.863296  108327 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.863481  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:28:08.863513  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:28:08.864071  108327 watch_cache.go:405] Replace watchCache (rev: 59612) 
I0920 04:28:08.864185  108327 store.go:1342] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicy
I0920 04:28:08.864206  108327 master.go:461] Enabling API group "policy".
I0920 04:28:08.864233  108327 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.864264  108327 reflector.go:153] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicy
I0920 04:28:08.864375  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:28:08.864408  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:28:08.865108  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:08.865171  108327 watch_cache.go:405] Replace watchCache (rev: 59612) 
I0920 04:28:08.865199  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:08.865275  108327 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0920 04:28:08.865295  108327 reflector.go:153] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0920 04:28:08.865483  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:08.865496  108327 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.865630  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:28:08.865653  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:28:08.866521  108327 watch_cache.go:405] Replace watchCache (rev: 59612) 
I0920 04:28:08.866618  108327 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0920 04:28:08.866663  108327 reflector.go:153] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0920 04:28:08.866658  108327 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.866780  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:28:08.866807  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:28:08.867041  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:08.867069  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:08.867219  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:08.867666  108327 watch_cache.go:405] Replace watchCache (rev: 59612) 
I0920 04:28:08.867734  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
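[annotation] The reflector.go lines interleaved here show the two halves of the list-watch pattern: an initial "Listing and watching *T" per resource, plus periodic "forcing resync" from the shared informer factory, which redelivers cached items to handlers. A minimal sketch of that loop, with hypothetical stand-in types rather than client-go's:

    package main

    import (
        "fmt"
        "time"
    )

    func run(resyncPeriod time.Duration, stop <-chan struct{}) {
        fmt.Println("Listing and watching *example.Object")
        items := []string{"a", "b"} // initial LIST result; would be kept fresh by WATCH
        ticker := time.NewTicker(resyncPeriod)
        defer ticker.Stop()
        for {
            select {
            case <-ticker.C:
                // Periodic resync: push every cached item through handlers again.
                fmt.Println("forcing resync:", items)
            case <-stop:
                return
            }
        }
    }

    func main() {
        stop := make(chan struct{})
        go run(10*time.Millisecond, stop)
        time.Sleep(35 * time.Millisecond)
        close(stop)
    }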
I0920 04:28:08.868474  108327 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0920 04:28:08.868523  108327 reflector.go:153] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0920 04:28:08.868690  108327 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.868965  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:28:08.868996  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:28:08.869202  108327 watch_cache.go:405] Replace watchCache (rev: 59612) 
I0920 04:28:08.870476  108327 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0920 04:28:08.870614  108327 reflector.go:153] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0920 04:28:08.870828  108327 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.871136  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:28:08.871199  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:28:08.871348  108327 watch_cache.go:405] Replace watchCache (rev: 59612) 
I0920 04:28:08.872672  108327 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0920 04:28:08.872747  108327 reflector.go:153] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0920 04:28:08.872901  108327 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.873607  108327 watch_cache.go:405] Replace watchCache (rev: 59612) 
I0920 04:28:08.873621  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:28:08.873650  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:28:08.874373  108327 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0920 04:28:08.874450  108327 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.874527  108327 reflector.go:153] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0920 04:28:08.874598  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:28:08.874625  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:28:08.875370  108327 watch_cache.go:405] Replace watchCache (rev: 59612) 
I0920 04:28:08.876042  108327 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0920 04:28:08.876115  108327 reflector.go:153] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0920 04:28:08.876268  108327 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.876445  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:28:08.876467  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:28:08.877134  108327 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0920 04:28:08.877097  108327 watch_cache.go:405] Replace watchCache (rev: 59612) 
I0920 04:28:08.877191  108327 reflector.go:153] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0920 04:28:08.877459  108327 master.go:461] Enabling API group "rbac.authorization.k8s.io".
I0920 04:28:08.877938  108327 watch_cache.go:405] Replace watchCache (rev: 59612) 
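[annotation] Each "Replace watchCache (rev: 59612)" line records the cacher swapping its in-memory object set for a fresh LIST snapshot taken at that etcd revision; all caches here land on the same revision because the store is quiet during startup. A toy sketch of the operation, using a hypothetical type rather than the real k8s.io/apiserver cacher:

    package main

    import (
        "fmt"
        "sync"
    )

    type watchCache struct {
        mu              sync.Mutex
        objects         map[string]string
        resourceVersion uint64
    }

    // Replace swaps the cached contents for a snapshot at a given revision.
    func (w *watchCache) Replace(objs map[string]string, rev uint64) {
        w.mu.Lock()
        defer w.mu.Unlock()
        w.objects = objs
        w.resourceVersion = rev
        fmt.Printf("Replace watchCache (rev: %d)\n", rev)
    }

    func main() {
        c := &watchCache{}
        c.Replace(map[string]string{"roles/admin": "..."}, 59612)
    }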
I0920 04:28:08.880129  108327 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.880266  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:28:08.880279  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:28:08.881094  108327 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0920 04:28:08.881130  108327 reflector.go:153] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0920 04:28:08.881361  108327 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.881618  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:28:08.881694  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:28:08.882224  108327 watch_cache.go:405] Replace watchCache (rev: 59612) 
I0920 04:28:08.882568  108327 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0920 04:28:08.882593  108327 master.go:461] Enabling API group "scheduling.k8s.io".
I0920 04:28:08.882685  108327 reflector.go:153] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0920 04:28:08.882686  108327 master.go:450] Skipping disabled API group "settings.k8s.io".
I0920 04:28:08.882990  108327 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.883139  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:28:08.883164  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:28:08.883525  108327 watch_cache.go:405] Replace watchCache (rev: 59612) 
I0920 04:28:08.884347  108327 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0920 04:28:08.884490  108327 reflector.go:153] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0920 04:28:08.884561  108327 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.884709  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:28:08.884734  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:28:08.885483  108327 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0920 04:28:08.885524  108327 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.885556  108327 reflector.go:153] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0920 04:28:08.885652  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:28:08.885672  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:28:08.885784  108327 watch_cache.go:405] Replace watchCache (rev: 59612) 
I0920 04:28:08.886264  108327 store.go:1342] Monitoring csinodes.storage.k8s.io count at <storage-prefix>//csinodes
I0920 04:28:08.886332  108327 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.886429  108327 reflector.go:153] Listing and watching *storage.CSINode from storage/cacher.go:/csinodes
I0920 04:28:08.886480  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:28:08.886495  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:28:08.886803  108327 watch_cache.go:405] Replace watchCache (rev: 59612) 
I0920 04:28:08.887334  108327 store.go:1342] Monitoring csidrivers.storage.k8s.io count at <storage-prefix>//csidrivers
I0920 04:28:08.887408  108327 reflector.go:153] Listing and watching *storage.CSIDriver from storage/cacher.go:/csidrivers
I0920 04:28:08.887716  108327 watch_cache.go:405] Replace watchCache (rev: 59612) 
I0920 04:28:08.887744  108327 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.887939  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:28:08.887977  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:28:08.888531  108327 watch_cache.go:405] Replace watchCache (rev: 59612) 
I0920 04:28:08.888803  108327 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0920 04:28:08.888827  108327 reflector.go:153] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0920 04:28:08.889056  108327 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.889251  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:28:08.889279  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:28:08.889496  108327 watch_cache.go:405] Replace watchCache (rev: 59612) 
I0920 04:28:08.889921  108327 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0920 04:28:08.889950  108327 master.go:461] Enabling API group "storage.k8s.io".
I0920 04:28:08.889979  108327 reflector.go:153] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
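[annotation] The "storing X in group/v1, reading as group/__internal" phrasing reflects hub-and-spoke versioning: objects are encoded at the chosen external version for etcd but decoded back into the single internal hub type. A toy illustration under that assumption, with hypothetical types:

    package main

    import "fmt"

    type internalClass struct{ Name string } // hub type, never serialized
    type v1Class struct{ Name string }       // external version written to storage

    func encodeV1(in internalClass) v1Class  { return v1Class(in) }
    func decodeV1(out v1Class) internalClass { return internalClass(out) }

    func main() {
        stored := encodeV1(internalClass{Name: "standard"}) // stored as v1
        read := decodeV1(stored)                            // read as __internal
        fmt.Println(read.Name)
    }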
I0920 04:28:08.890187  108327 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.890458  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:28:08.890491  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:28:08.891197  108327 store.go:1342] Monitoring deployments.apps count at <storage-prefix>//deployments
I0920 04:28:08.891254  108327 reflector.go:153] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0920 04:28:08.891428  108327 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.891565  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:28:08.891620  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:28:08.891882  108327 watch_cache.go:405] Replace watchCache (rev: 59612) 
I0920 04:28:08.892424  108327 watch_cache.go:405] Replace watchCache (rev: 59612) 
I0920 04:28:08.893211  108327 store.go:1342] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0920 04:28:08.893257  108327 reflector.go:153] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0920 04:28:08.893435  108327 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.893574  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:28:08.893594  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:28:08.894350  108327 watch_cache.go:405] Replace watchCache (rev: 59612) 
I0920 04:28:08.894460  108327 store.go:1342] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0920 04:28:08.894486  108327 reflector.go:153] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0920 04:28:08.894737  108327 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.894984  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:28:08.895006  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:28:08.895631  108327 watch_cache.go:405] Replace watchCache (rev: 59612) 
I0920 04:28:08.895862  108327 store.go:1342] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0920 04:28:08.895898  108327 reflector.go:153] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0920 04:28:08.896095  108327 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.896217  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:28:08.896234  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:28:08.896657  108327 watch_cache.go:405] Replace watchCache (rev: 59612) 
I0920 04:28:08.896955  108327 store.go:1342] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0920 04:28:08.896976  108327 master.go:461] Enabling API group "apps".
I0920 04:28:08.896983  108327 reflector.go:153] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
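[annotation] Each "Monitoring <resource> count at <storage-prefix>//<resource>" line sets up periodic polling of the number of keys under that etcd prefix, on the CountMetricPollPeriod from the config dump (1 minute), to feed an object-count metric. A minimal sketch of that shape; countKeys is a hypothetical stand-in for the etcd count query:

    package main

    import (
        "fmt"
        "time"
    )

    func monitorCount(prefix string, period time.Duration,
        countKeys func(string) int, stop <-chan struct{}) {
        t := time.NewTicker(period)
        defer t.Stop()
        for {
            select {
            case <-t.C:
                fmt.Printf("%s count: %d\n", prefix, countKeys(prefix))
            case <-stop:
                return
            }
        }
    }

    func main() {
        stop := make(chan struct{})
        go monitorCount("<storage-prefix>//deployments", 20*time.Millisecond,
            func(string) int { return 3 }, stop)
        time.Sleep(70 * time.Millisecond)
        close(stop)
    }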
I0920 04:28:08.897012  108327 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.897116  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:28:08.897128  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:28:08.897871  108327 watch_cache.go:405] Replace watchCache (rev: 59612) 
I0920 04:28:08.898116  108327 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0920 04:28:08.898155  108327 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.898173  108327 reflector.go:153] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0920 04:28:08.898249  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:28:08.898263  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:28:08.898711  108327 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0920 04:28:08.898753  108327 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.898782  108327 reflector.go:153] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0920 04:28:08.898830  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:28:08.898841  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:28:08.899115  108327 watch_cache.go:405] Replace watchCache (rev: 59612) 
I0920 04:28:08.899620  108327 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0920 04:28:08.899625  108327 watch_cache.go:405] Replace watchCache (rev: 59612) 
I0920 04:28:08.899659  108327 reflector.go:153] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0920 04:28:08.899661  108327 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.899992  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:28:08.900013  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:28:08.900287  108327 watch_cache.go:405] Replace watchCache (rev: 59612) 
I0920 04:28:08.900850  108327 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0920 04:28:08.900877  108327 master.go:461] Enabling API group "admissionregistration.k8s.io".
I0920 04:28:08.900879  108327 reflector.go:153] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0920 04:28:08.900921  108327 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.901337  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:28:08.901359  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:28:08.901540  108327 watch_cache.go:405] Replace watchCache (rev: 59612) 
I0920 04:28:08.901912  108327 store.go:1342] Monitoring events count at <storage-prefix>//events
I0920 04:28:08.901934  108327 master.go:461] Enabling API group "events.k8s.io".
I0920 04:28:08.901958  108327 reflector.go:153] Listing and watching *core.Event from storage/cacher.go:/events
I0920 04:28:08.902332  108327 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.902603  108327 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.902822  108327 watch_cache.go:405] Replace watchCache (rev: 59612) 
I0920 04:28:08.902918  108327 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.902999  108327 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.903081  108327 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.903211  108327 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.903419  108327 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.903516  108327 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.903593  108327 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.903713  108327 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
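[annotation] Note that the authentication/authorization resources above (tokenreviews, the subjectaccessreview family) get storage codecs configured but no "Monitoring ... count" or watch-cache lines follow: they behave as virtual, request-scoped resources that are evaluated on the spot rather than persisted. A toy sketch of that shape, with hypothetical types:

    package main

    import "fmt"

    type subjectAccessReview struct {
        User, Verb, Resource string
        Allowed              bool
    }

    // create decides the review in-process instead of writing it to storage.
    func create(r subjectAccessReview) subjectAccessReview {
        r.Allowed = r.User == "admin" // stand-in policy check
        return r
    }

    func main() {
        fmt.Println(create(subjectAccessReview{User: "admin", Verb: "get", Resource: "pods"}))
    }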
I0920 04:28:08.904522  108327 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.904931  108327 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.905691  108327 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.905918  108327 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.906605  108327 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.906860  108327 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.907703  108327 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.908015  108327 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.908673  108327 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.908891  108327 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0920 04:28:08.908940  108327 genericapiserver.go:404] Skipping API batch/v2alpha1 because it has no resources.
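[annotation] The "Skipping API <group/version> because it has no resources" warnings (here and below for node.k8s.io/v1alpha1, rbac.authorization.k8s.io/v1alpha1, scheduling.k8s.io/v1alpha1, storage.k8s.io/v1alpha1) fire when no storage was wired up for any resource in that version, typically because its alpha resources are disabled by default. A sketch of that check under those assumptions; the map is hypothetical bookkeeping, not the real genericapiserver APIGroupInfo:

    package main

    import "fmt"

    func main() {
        storage := map[string][]string{
            "batch/v1":       {"jobs"},
            "batch/v1beta1":  {"cronjobs"},
            "batch/v2alpha1": {}, // alpha resources absent by default
        }
        for _, gv := range []string{"batch/v1", "batch/v1beta1", "batch/v2alpha1"} {
            if len(storage[gv]) == 0 {
                fmt.Printf("Skipping API %s because it has no resources.\n", gv)
                continue
            }
            fmt.Printf("Installing %s\n", gv)
        }
    }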
I0920 04:28:08.909449  108327 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.909588  108327 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.909751  108327 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.910540  108327 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.911220  108327 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.911983  108327 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.912239  108327 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.913224  108327 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.913792  108327 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.914033  108327 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.914641  108327 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0920 04:28:08.914708  108327 genericapiserver.go:404] Skipping API node.k8s.io/v1alpha1 because it has no resources.
I0920 04:28:08.915434  108327 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.915845  108327 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.916419  108327 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.916925  108327 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.917291  108327 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.917915  108327 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.918553  108327 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.919126  108327 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.919656  108327 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.920325  108327 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.920869  108327 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0920 04:28:08.920922  108327 genericapiserver.go:404] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
I0920 04:28:08.921615  108327 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.922107  108327 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0920 04:28:08.922183  108327 genericapiserver.go:404] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
I0920 04:28:08.922651  108327 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.923120  108327 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.923341  108327 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.923863  108327 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.924256  108327 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.924770  108327 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.925323  108327 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0920 04:28:08.925379  108327 genericapiserver.go:404] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
I0920 04:28:08.926145  108327 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.926723  108327 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.926938  108327 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.927683  108327 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.927868  108327 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.928087  108327 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.928714  108327 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.928965  108327 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.929195  108327 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.930035  108327 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.930318  108327 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.930576  108327 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0920 04:28:08.930631  108327 genericapiserver.go:404] Skipping API apps/v1beta2 because it has no resources.
W0920 04:28:08.930637  108327 genericapiserver.go:404] Skipping API apps/v1beta1 because it has no resources.
I0920 04:28:08.931245  108327 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.931764  108327 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.932526  108327 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.933081  108327 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.933831  108327 storage_factory.go:285] storing events.events.k8s.io in events.k8s.io/v1beta1, reading as events.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"79daed50-3f61-49ed-b9a7-7aec41623006", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:28:08.937067  108327 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0920 04:28:08.937094  108327 healthz.go:177] healthz check poststarthook/bootstrap-controller failed: not finished
I0920 04:28:08.937108  108327 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:28:08.937118  108327 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0920 04:28:08.937126  108327 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0920 04:28:08.937133  108327 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[-]poststarthook/bootstrap-controller failed: reason withheld
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0920 04:28:08.937172  108327 httplog.go:90] GET /healthz: (190.651µs) 0 [Go-http-client/1.1 127.0.0.1:37688]
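The /healthz responses above aggregate a set of named checks, printing [+] for each passing check and [-] plus "reason withheld" for each failing one, and returning a non-200 status until everything passes. A minimal Go sketch of that aggregation pattern follows; the check names, listen address, and handler shape are illustrative and are not the actual k8s.io/apiserver healthz implementation.

package main

import (
	"fmt"
	"log"
	"net/http"
)

// Check is one named readiness probe, mirroring the [+]/[-] lines above.
type Check struct {
	Name string
	Run  func() error
}

// healthzHandler aggregates checks; any failure yields a 500 response and
// a trailing "healthz check failed" line, as in the log output above.
func healthzHandler(checks []Check) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		failed := false
		body := ""
		for _, c := range checks {
			if err := c.Run(); err != nil {
				failed = true
				body += fmt.Sprintf("[-]%s failed: reason withheld\n", c.Name)
			} else {
				body += fmt.Sprintf("[+]%s ok\n", c.Name)
			}
		}
		if failed {
			w.WriteHeader(http.StatusInternalServerError)
			body += "healthz check failed\n"
		}
		fmt.Fprint(w, body)
	}
}

func main() {
	checks := []Check{
		{Name: "ping", Run: func() error { return nil }},
		{Name: "etcd", Run: func() error { return fmt.Errorf("client connection not yet established") }},
	}
	http.HandleFunc("/healthz", healthzHandler(checks))
	log.Fatal(http.ListenAndServe("127.0.0.1:8080", nil))
}

With the failing etcd check above, GET /healthz returns 0-length-logged 500s exactly like the polling loop in this log, until the check starts returning nil.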
I0920 04:28:08.938505  108327 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.536484ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37690]
I0920 04:28:08.940935  108327 httplog.go:90] GET /api/v1/services: (1.082372ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37690]
I0920 04:28:08.944705  108327 httplog.go:90] GET /api/v1/services: (953.154µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37690]
I0920 04:28:08.946713  108327 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0920 04:28:08.946745  108327 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:28:08.946755  108327 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0920 04:28:08.946761  108327 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0920 04:28:08.946769  108327 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0920 04:28:08.946821  108327 httplog.go:90] GET /healthz: (151.873µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37688]
I0920 04:28:08.947839  108327 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.227102ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37690]
I0920 04:28:08.948685  108327 httplog.go:90] GET /api/v1/services: (803.347µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37692]
I0920 04:28:08.949141  108327 httplog.go:90] GET /api/v1/services: (1.926133ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37688]
I0920 04:28:08.949718  108327 httplog.go:90] POST /api/v1/namespaces: (1.39551ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37690]
I0920 04:28:08.950918  108327 httplog.go:90] GET /api/v1/namespaces/kube-public: (739.464µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37688]
I0920 04:28:08.952723  108327 httplog.go:90] POST /api/v1/namespaces: (1.491436ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37688]
I0920 04:28:08.954046  108327 httplog.go:90] GET /api/v1/namespaces/kube-node-lease: (987.177µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37688]
I0920 04:28:08.955918  108327 httplog.go:90] POST /api/v1/namespaces: (1.377089ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37688]
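The GET-404-then-POST-201 pairs above are the bootstrap controller ensuring the kube-system, kube-public, and kube-node-lease namespaces exist. A minimal client-go sketch of that ensure pattern, assuming client-go v0.18+ context-taking method signatures and a kubeconfig at the default path; this is not the bootstrap controller's real code.

package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// ensureNamespace creates the namespace only when a GET comes back
// NotFound, matching the GET 404 / POST 201 pairs in the log above.
func ensureNamespace(ctx context.Context, cs kubernetes.Interface, name string) error {
	_, err := cs.CoreV1().Namespaces().Get(ctx, name, metav1.GetOptions{})
	if err == nil || !apierrors.IsNotFound(err) {
		return err
	}
	ns := &corev1.Namespace{ObjectMeta: metav1.ObjectMeta{Name: name}}
	_, err = cs.CoreV1().Namespaces().Create(ctx, ns, metav1.CreateOptions{})
	return err
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	for _, name := range []string{"kube-system", "kube-public", "kube-node-lease"} {
		if err := ensureNamespace(context.Background(), cs, name); err != nil {
			log.Fatal(err)
		}
		fmt.Println("ensured namespace", name)
	}
}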
I0920 04:28:09.038048  108327 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0920 04:28:09.038086  108327 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:28:09.038118  108327 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0920 04:28:09.038124  108327 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0920 04:28:09.038130  108327 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0920 04:28:09.038167  108327 httplog.go:90] GET /healthz: (269.86µs) 0 [Go-http-client/1.1 127.0.0.1:37688]
I0920 04:28:09.047626  108327 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0920 04:28:09.047664  108327 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:28:09.047685  108327 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0920 04:28:09.047691  108327 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0920 04:28:09.047697  108327 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0920 04:28:09.047723  108327 httplog.go:90] GET /healthz: (283.791µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37688]
I0920 04:28:09.138232  108327 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0920 04:28:09.138274  108327 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:28:09.138288  108327 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0920 04:28:09.138298  108327 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0920 04:28:09.138306  108327 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0920 04:28:09.138348  108327 httplog.go:90] GET /healthz: (308.284µs) 0 [Go-http-client/1.1 127.0.0.1:37688]
I0920 04:28:09.147651  108327 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0920 04:28:09.147682  108327 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:28:09.147691  108327 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0920 04:28:09.147698  108327 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0920 04:28:09.147703  108327 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0920 04:28:09.147752  108327 httplog.go:90] GET /healthz: (248.553µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37688]
I0920 04:28:09.238150  108327 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0920 04:28:09.238380  108327 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:28:09.238993  108327 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0920 04:28:09.239148  108327 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0920 04:28:09.239221  108327 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0920 04:28:09.239429  108327 httplog.go:90] GET /healthz: (1.409637ms) 0 [Go-http-client/1.1 127.0.0.1:37688]
I0920 04:28:09.247575  108327 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0920 04:28:09.247615  108327 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:28:09.247628  108327 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0920 04:28:09.247636  108327 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0920 04:28:09.247643  108327 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0920 04:28:09.247688  108327 httplog.go:90] GET /healthz: (325.66µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37688]
I0920 04:28:09.292233  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:09.292273  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:09.292234  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:09.292321  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:09.292304  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:09.292316  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
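The bursts of "forcing resync" above come from shared-informer reflectors re-delivering their cached objects on a fixed resync period. A minimal client-go sketch of wiring up such an informer follows; the 30-second period and the pod handler are arbitrary choices for illustration, since the test's actual resync period is not visible in the log.

package main

import (
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Every 30s the reflector forces a resync, re-firing UpdateFunc for
	// every cached object -- the "forcing resync" lines in the log above.
	factory := informers.NewSharedInformerFactory(cs, 30*time.Second)
	factory.Core().V1().Pods().Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
		UpdateFunc: func(oldObj, newObj interface{}) {
			pod := newObj.(*corev1.Pod)
			log.Printf("resync/update for pod %s/%s", pod.Namespace, pod.Name)
		},
	})

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)
	factory.WaitForCacheSync(stop)
	select {} // block forever; a real controller would run workers here
}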
I0920 04:28:09.338101  108327 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0920 04:28:09.338140  108327 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:28:09.338151  108327 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0920 04:28:09.338161  108327 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0920 04:28:09.338170  108327 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0920 04:28:09.338205  108327 httplog.go:90] GET /healthz: (265.019µs) 0 [Go-http-client/1.1 127.0.0.1:37688]
I0920 04:28:09.347656  108327 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0920 04:28:09.347692  108327 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:28:09.347703  108327 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0920 04:28:09.347711  108327 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0920 04:28:09.347717  108327 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0920 04:28:09.347748  108327 httplog.go:90] GET /healthz: (269.857µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37688]
I0920 04:28:09.364011  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:09.364303  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:09.364330  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:09.365057  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:09.365214  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:09.365914  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:09.438056  108327 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0920 04:28:09.438121  108327 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:28:09.438131  108327 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0920 04:28:09.438137  108327 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0920 04:28:09.438144  108327 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0920 04:28:09.438177  108327 httplog.go:90] GET /healthz: (262.158µs) 0 [Go-http-client/1.1 127.0.0.1:37688]
I0920 04:28:09.447565  108327 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0920 04:28:09.447601  108327 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:28:09.447611  108327 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0920 04:28:09.447617  108327 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0920 04:28:09.447623  108327 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0920 04:28:09.447680  108327 httplog.go:90] GET /healthz: (258.605µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37688]
I0920 04:28:09.499588  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:09.538219  108327 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0920 04:28:09.538257  108327 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:28:09.538267  108327 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0920 04:28:09.538274  108327 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0920 04:28:09.538279  108327 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0920 04:28:09.538315  108327 httplog.go:90] GET /healthz: (265.431µs) 0 [Go-http-client/1.1 127.0.0.1:37688]
I0920 04:28:09.547682  108327 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0920 04:28:09.547719  108327 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:28:09.547729  108327 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0920 04:28:09.547736  108327 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0920 04:28:09.547742  108327 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0920 04:28:09.547781  108327 httplog.go:90] GET /healthz: (299.145µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37688]
I0920 04:28:09.568989  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:09.638238  108327 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0920 04:28:09.638272  108327 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:28:09.638282  108327 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0920 04:28:09.638288  108327 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0920 04:28:09.638294  108327 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0920 04:28:09.638329  108327 httplog.go:90] GET /healthz: (295.803µs) 0 [Go-http-client/1.1 127.0.0.1:37688]
I0920 04:28:09.648414  108327 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0920 04:28:09.648607  108327 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:28:09.648637  108327 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0920 04:28:09.648679  108327 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0920 04:28:09.648700  108327 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0920 04:28:09.648838  108327 httplog.go:90] GET /healthz: (1.32835ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37688]
I0920 04:28:09.738161  108327 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0920 04:28:09.738377  108327 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:28:09.738480  108327 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0920 04:28:09.738529  108327 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0920 04:28:09.738580  108327 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0920 04:28:09.738873  108327 httplog.go:90] GET /healthz: (869.072µs) 0 [Go-http-client/1.1 127.0.0.1:37688]
I0920 04:28:09.747534  108327 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0920 04:28:09.747574  108327 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:28:09.747587  108327 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0920 04:28:09.747597  108327 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0920 04:28:09.747605  108327 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0920 04:28:09.747642  108327 httplog.go:90] GET /healthz: (291.697µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37688]
I0920 04:28:09.804808  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:28:09.804910  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
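At this point the apiserver's etcd client has resolved its endpoint, and in the healthz blocks below the etcd check flips from [-] to [+]. A sketch of probing the same endpoint directly with the etcd v3 client, purely illustrative and assuming the current go.etcd.io/etcd/client/v3 module path (older releases imported go.etcd.io/etcd/clientv3):

package main

import (
	"context"
	"log"
	"time"

	clientv3 "go.etcd.io/etcd/client/v3"
)

func main() {
	// Dial the same endpoint the test apiserver uses in the log above.
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"http://127.0.0.1:2379"},
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()
	// Status is a cheap readiness probe, analogous to the etcd healthz check.
	if _, err := cli.Status(ctx, "http://127.0.0.1:2379"); err != nil {
		log.Fatalf("etcd not ready: %v", err)
	}
	log.Println("etcd endpoint is reachable")
}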
I0920 04:28:09.839264  108327 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:28:09.839322  108327 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0920 04:28:09.839330  108327 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0920 04:28:09.839338  108327 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0920 04:28:09.839385  108327 httplog.go:90] GET /healthz: (1.323588ms) 0 [Go-http-client/1.1 127.0.0.1:37688]
I0920 04:28:09.848649  108327 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:28:09.848828  108327 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0920 04:28:09.848858  108327 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0920 04:28:09.848903  108327 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0920 04:28:09.849089  108327 httplog.go:90] GET /healthz: (1.645563ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37688]
I0920 04:28:09.865318  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:09.865318  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:09.865624  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:09.867220  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:09.867225  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:09.867376  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:09.867922  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:09.938290  108327 httplog.go:90] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-node-critical: (1.297643ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37692]
I0920 04:28:09.938354  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.34581ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37688]
I0920 04:28:09.938856  108327 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:28:09.938881  108327 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0920 04:28:09.938891  108327 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0920 04:28:09.938898  108327 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0920 04:28:09.938934  108327 httplog.go:90] GET /healthz: (922.949µs) 0 [Go-http-client/1.1 127.0.0.1:37698]
I0920 04:28:09.939025  108327 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.571546ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37696]
I0920 04:28:09.939584  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (860.494µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37688]
I0920 04:28:09.940448  108327 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: (888.008µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37696]
I0920 04:28:09.940630  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (721.012µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37688]
I0920 04:28:09.941137  108327 httplog.go:90] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (1.392568ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37698]
I0920 04:28:09.941382  108327 storage_scheduling.go:139] created PriorityClass system-node-critical with value 2000001000
I0920 04:28:09.942182  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (1.127158ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37688]
I0920 04:28:09.942376  108327 httplog.go:90] POST /api/v1/namespaces/kube-system/configmaps: (1.461231ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37696]
I0920 04:28:09.942951  108327 httplog.go:90] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-cluster-critical: (1.306452ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37698]
I0920 04:28:09.943298  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (709.263µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37688]
I0920 04:28:09.944688  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (965.081µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37696]
I0920 04:28:09.944987  108327 httplog.go:90] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (1.368312ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37698]
I0920 04:28:09.945185  108327 storage_scheduling.go:139] created PriorityClass system-cluster-critical with value 2000000000
I0920 04:28:09.945206  108327 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
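The two POSTs above are the scheduling post-start hook seeding the system-node-critical (2000001000) and system-cluster-critical (2000000000) priority classes. A minimal client-go sketch of creating a PriorityClass follows; note the log writes through scheduling.k8s.io/v1beta1 while this sketch uses the scheduling/v1 client, and only the names and values are copied from the log.

package main

import (
	"context"
	"log"

	schedulingv1 "k8s.io/api/scheduling/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Names and values match the storage_scheduling.go lines in the log.
	for name, value := range map[string]int32{
		"system-node-critical":    2000001000,
		"system-cluster-critical": 2000000000,
	} {
		pc := &schedulingv1.PriorityClass{
			ObjectMeta: metav1.ObjectMeta{Name: name},
			Value:      value,
		}
		if _, err := cs.SchedulingV1().PriorityClasses().Create(
			context.Background(), pc, metav1.CreateOptions{}); err != nil {
			log.Printf("create %s: %v (may already exist)", name, err)
			continue
		}
		log.Printf("created PriorityClass %s with value %d", name, value)
	}
}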
I0920 04:28:09.945669  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (643.463µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37696]
I0920 04:28:09.946747  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (706.974µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37696]
I0920 04:28:09.947770  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (688.983µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37696]
I0920 04:28:09.947845  108327 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:28:09.947873  108327 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:28:09.947903  108327 httplog.go:90] GET /healthz: (688.928µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37698]
I0920 04:28:09.948829  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-admin: (698.224µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37698]
I0920 04:28:09.950501  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.269653ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37698]
I0920 04:28:09.950803  108327 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/cluster-admin
I0920 04:28:09.951881  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:discovery: (832.126µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37698]
I0920 04:28:09.953771  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.308365ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37698]
I0920 04:28:09.953987  108327 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:discovery
I0920 04:28:09.954820  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:basic-user: (672.096µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37698]
I0920 04:28:09.956556  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.329695ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37698]
I0920 04:28:09.956755  108327 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:basic-user
I0920 04:28:09.957660  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:public-info-viewer: (732.835µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37698]
I0920 04:28:09.959140  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.03687ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37698]
I0920 04:28:09.959448  108327 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:public-info-viewer
I0920 04:28:09.960622  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (893.488µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37698]
I0920 04:28:09.962458  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.363827ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37698]
I0920 04:28:09.962619  108327 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/admin
I0920 04:28:09.963658  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (858.632µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37698]
I0920 04:28:09.965650  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.476435ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37698]
I0920 04:28:09.965816  108327 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/edit
I0920 04:28:09.966722  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (731.094µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37698]
I0920 04:28:09.968318  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.229091ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37698]
I0920 04:28:09.968676  108327 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/view
I0920 04:28:09.969786  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (831.614µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37698]
I0920 04:28:09.971367  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.196005ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37698]
I0920 04:28:09.971597  108327 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin
I0920 04:28:09.972687  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (870.193µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37698]
I0920 04:28:09.974665  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.513291ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37698]
I0920 04:28:09.974905  108327 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit
I0920 04:28:09.975834  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (737.326µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37698]
I0920 04:28:09.977804  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.610592ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37698]
I0920 04:28:09.978030  108327 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view
I0920 04:28:09.978912  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:heapster: (713.18µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37698]
I0920 04:28:09.980646  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.292082ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37698]
I0920 04:28:09.980942  108327 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:heapster
I0920 04:28:09.982058  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node: (775.491µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37698]
I0920 04:28:09.984432  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.76664ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37698]
I0920 04:28:09.984711  108327 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node
I0920 04:28:09.985633  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-problem-detector: (739.241µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37698]
I0920 04:28:09.987483  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.351852ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37698]
I0920 04:28:09.987767  108327 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector
I0920 04:28:09.988840  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kubelet-api-admin: (892.456µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37698]
I0920 04:28:09.990297  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.11332ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37698]
I0920 04:28:09.990527  108327 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kubelet-api-admin
I0920 04:28:09.991235  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-bootstrapper: (569.869µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37698]
I0920 04:28:09.992843  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.143198ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37698]
I0920 04:28:09.993095  108327 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-bootstrapper
I0920 04:28:09.994043  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:auth-delegator: (736.779µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37698]
I0920 04:28:09.995954  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.312616ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37698]
I0920 04:28:09.996231  108327 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:auth-delegator
I0920 04:28:09.997403  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-aggregator: (930.668µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37698]
I0920 04:28:09.999427  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.533656ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37698]
I0920 04:28:10.001263  108327 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator
I0920 04:28:10.002479  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-controller-manager: (896.795µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37698]
I0920 04:28:10.004486  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.552509ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37698]
I0920 04:28:10.004743  108327 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager
I0920 04:28:10.005721  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-dns: (815.877µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37698]
I0920 04:28:10.007562  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.393143ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37698]
I0920 04:28:10.007818  108327 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-dns
I0920 04:28:10.008748  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:persistent-volume-provisioner: (775.744µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37698]
I0920 04:28:10.010719  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.517809ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37698]
I0920 04:28:10.011032  108327 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner
I0920 04:28:10.012138  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-attacher: (875.051µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37698]
I0920 04:28:10.014293  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.658736ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37698]
I0920 04:28:10.014570  108327 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:csi-external-attacher
I0920 04:28:10.015711  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:nodeclient: (878.209µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37698]
I0920 04:28:10.017538  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.395757ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37698]
I0920 04:28:10.017770  108327 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:nodeclient
I0920 04:28:10.018848  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient: (861.589µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37698]
I0920 04:28:10.020647  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.341299ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37698]
I0920 04:28:10.020912  108327 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
I0920 04:28:10.021985  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:volume-scheduler: (809.021µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37698]
I0920 04:28:10.023769  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.38496ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37698]
I0920 04:28:10.023962  108327 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:volume-scheduler
I0920 04:28:10.025033  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-proxier: (792.187µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37698]
I0920 04:28:10.026903  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.446519ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37698]
I0920 04:28:10.027120  108327 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-proxier
I0920 04:28:10.028111  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-scheduler: (737.895µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37698]
I0920 04:28:10.029911  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.379553ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37698]
I0920 04:28:10.030248  108327 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler
I0920 04:28:10.031250  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-provisioner: (776.917µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37698]
I0920 04:28:10.033003  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.317888ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37698]
I0920 04:28:10.033195  108327 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:csi-external-provisioner
I0920 04:28:10.034169  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:attachdetach-controller: (801.957µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37698]
I0920 04:28:10.035817  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.16346ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37698]
I0920 04:28:10.036104  108327 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0920 04:28:10.037248  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:clusterrole-aggregation-controller: (767.207µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37698]
I0920 04:28:10.038428  108327 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:28:10.038459  108327 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:28:10.038505  108327 httplog.go:90] GET /healthz: (745.606µs) 0 [Go-http-client/1.1 127.0.0.1:37696]
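The block above is the aggregated /healthz output: each named check reports [+] ok or [-] failed, failure reasons are withheld from the response body, and any single failing check fails the whole probe (hence the 0-byte 500 responses while rbac/bootstrap-roles is unfinished). A stdlib-only sketch of that aggregation shape; the check names and the bootstrapDone flag are illustrative, not the apiserver's actual implementation:

```go
package main

import (
	"fmt"
	"log"
	"net/http"
)

type check struct {
	name string
	run  func() error
}

// healthz runs every check and renders the [+]/[-] report; one failure
// turns the whole endpoint into a 500, as seen in the log above.
func healthz(checks []check) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		failed := false
		var body string
		for _, c := range checks {
			if err := c.run(); err != nil {
				failed = true
				// Real failure details are withheld from the caller.
				body += fmt.Sprintf("[-]%s failed: reason withheld\n", c.name)
			} else {
				body += fmt.Sprintf("[+]%s ok\n", c.name)
			}
		}
		if failed {
			http.Error(w, body+"healthz check failed", http.StatusInternalServerError)
			return
		}
		fmt.Fprint(w, body+"ok\n")
	}
}

func main() {
	bootstrapDone := false // would flip to true once bootstrap roles are reconciled
	checks := []check{
		{"ping", func() error { return nil }},
		{"poststarthook/rbac/bootstrap-roles", func() error {
			if !bootstrapDone {
				return fmt.Errorf("not finished")
			}
			return nil
		}},
	}
	http.Handle("/healthz", healthz(checks))
	log.Fatal(http.ListenAndServe("127.0.0.1:8080", nil))
}
```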
I0920 04:28:10.039185  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.452812ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37698]
I0920 04:28:10.039363  108327 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0920 04:28:10.040260  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:cronjob-controller: (646.555µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37698]
I0920 04:28:10.041871  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.227889ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37698]
I0920 04:28:10.042147  108327 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0920 04:28:10.043141  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:daemon-set-controller: (820.092µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37698]
I0920 04:28:10.044998  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.397847ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37698]
I0920 04:28:10.045380  108327 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0920 04:28:10.046497  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:deployment-controller: (792.334µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37698]
I0920 04:28:10.048481  108327 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:28:10.048512  108327 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:28:10.048523  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.548218ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37698]
I0920 04:28:10.048546  108327 httplog.go:90] GET /healthz: (1.198975ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37696]
I0920 04:28:10.048683  108327 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller
I0920 04:28:10.049764  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:disruption-controller: (892.387µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37696]
I0920 04:28:10.051884  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.540575ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37696]
I0920 04:28:10.052224  108327 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller
I0920 04:28:10.053477  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:endpoint-controller: (822.832µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37696]
I0920 04:28:10.055367  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.395698ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37696]
I0920 04:28:10.055734  108327 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0920 04:28:10.056902  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:expand-controller: (849.644µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37696]
I0920 04:28:10.058900  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.500045ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37696]
I0920 04:28:10.059209  108327 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:expand-controller
I0920 04:28:10.060247  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:generic-garbage-collector: (852.695µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37696]
I0920 04:28:10.062212  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.419954ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37696]
I0920 04:28:10.062540  108327 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0920 04:28:10.063614  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:horizontal-pod-autoscaler: (837.734µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37696]
I0920 04:28:10.065615  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.630855ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37696]
I0920 04:28:10.065919  108327 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0920 04:28:10.067088  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:job-controller: (840.45µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37696]
I0920 04:28:10.069131  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.655295ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37696]
I0920 04:28:10.069329  108327 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller
I0920 04:28:10.070410  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:namespace-controller: (799.498µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37696]
I0920 04:28:10.072172  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.310936ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37696]
I0920 04:28:10.072427  108327 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:namespace-controller
I0920 04:28:10.073587  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:node-controller: (943.95µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37696]
I0920 04:28:10.075878  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.816281ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37696]
I0920 04:28:10.076204  108327 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:node-controller
I0920 04:28:10.077291  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:persistent-volume-binder: (795.956µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37696]
I0920 04:28:10.079179  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.428996ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37696]
I0920 04:28:10.079524  108327 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0920 04:28:10.080713  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pod-garbage-collector: (946.841µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37696]
I0920 04:28:10.082514  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.443224ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37696]
I0920 04:28:10.082769  108327 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0920 04:28:10.083809  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replicaset-controller: (885.123µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37696]
I0920 04:28:10.085588  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.309158ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37696]
I0920 04:28:10.085791  108327 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0920 04:28:10.086684  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replication-controller: (721.955µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37696]
I0920 04:28:10.088315  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.211987ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37696]
I0920 04:28:10.088562  108327 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller
I0920 04:28:10.089468  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:resourcequota-controller: (716.34µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37696]
I0920 04:28:10.091483  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.505273ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37696]
I0920 04:28:10.091770  108327 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0920 04:28:10.092767  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:route-controller: (747.216µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37696]
I0920 04:28:10.094379  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.240719ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37696]
I0920 04:28:10.094666  108327 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:route-controller
I0920 04:28:10.095670  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-account-controller: (732.61µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37696]
I0920 04:28:10.097445  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.325586ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37696]
I0920 04:28:10.097686  108327 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-account-controller
I0920 04:28:10.098619  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-controller: (743.523µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37696]
I0920 04:28:10.100352  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.338029ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37696]
I0920 04:28:10.100609  108327 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-controller
I0920 04:28:10.101578  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:statefulset-controller: (757.676µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37696]
I0920 04:28:10.103422  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.415111ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37696]
I0920 04:28:10.103848  108327 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0920 04:28:10.104830  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:ttl-controller: (723.914µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37696]
I0920 04:28:10.119050  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.926557ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37696]
I0920 04:28:10.119334  108327 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-controller
I0920 04:28:10.138779  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:certificate-controller: (1.554822ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37696]
I0920 04:28:10.138782  108327 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:28:10.138872  108327 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:28:10.138904  108327 httplog.go:90] GET /healthz: (995.722µs) 0 [Go-http-client/1.1 127.0.0.1:37698]
I0920 04:28:10.148168  108327 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:28:10.148202  108327 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:28:10.148239  108327 httplog.go:90] GET /healthz: (939.492µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37698]
I0920 04:28:10.159145  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.98139ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37698]
I0920 04:28:10.159435  108327 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller
I0920 04:28:10.178340  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pvc-protection-controller: (1.22479ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37698]
I0920 04:28:10.199744  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.505442ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37698]
I0920 04:28:10.200060  108327 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0920 04:28:10.218470  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pv-protection-controller: (1.275733ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37698]
I0920 04:28:10.238998  108327 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:28:10.239258  108327 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:28:10.239440  108327 httplog.go:90] GET /healthz: (1.511008ms) 0 [Go-http-client/1.1 127.0.0.1:37696]
I0920 04:28:10.239783  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.543702ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37698]
I0920 04:28:10.240006  108327 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0920 04:28:10.248449  108327 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:28:10.248508  108327 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:28:10.248582  108327 httplog.go:90] GET /healthz: (1.224449ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37698]
I0920 04:28:10.258521  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin: (1.379074ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37698]
I0920 04:28:10.279453  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.197075ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37698]
I0920 04:28:10.279888  108327 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin
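With the roles in place, the same get-or-create flow moves on to the default ClusterRoleBindings, starting with cluster-admin. A sketch of the object such a binding carries, assuming the well-known default of granting the cluster-admin role to the system:masters group; the binding name is deliberately a demo name so it does not collide with the real bootstrap object:

```go
package main

import (
	"context"
	"fmt"

	rbacv1 "k8s.io/api/rbac/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// RoleRef points at the cluster-scoped role; Subjects lists who gets it.
	crb := &rbacv1.ClusterRoleBinding{
		ObjectMeta: metav1.ObjectMeta{Name: "cluster-admin-demo"}, // hypothetical name
		RoleRef: rbacv1.RoleRef{
			APIGroup: rbacv1.GroupName,
			Kind:     "ClusterRole",
			Name:     "cluster-admin",
		},
		Subjects: []rbacv1.Subject{{
			APIGroup: rbacv1.GroupName,
			Kind:     "Group",
			Name:     "system:masters",
		}},
	}
	created, err := cs.RbacV1().ClusterRoleBindings().Create(context.Background(), crb, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("created", created.Name)
}
```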
I0920 04:28:10.292489  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:10.292524  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:10.292528  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:10.292477  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:10.292495  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:10.292513  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
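The six "forcing resync" lines come from reflectors backing a shared informer factory built with a non-zero resync period: on each tick the cached objects are replayed to registered handlers even when nothing changed, which is why the lines recur in a batch roughly once per period. A sketch of wiring that up; the one-second period and the pod handler are chosen for illustration only:

```go
package main

import (
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// A non-zero resync period makes each reflector periodically replay
	// its cache; UpdateFunc then fires for every object, changed or not.
	factory := informers.NewSharedInformerFactory(cs, 1*time.Second)
	factory.Core().V1().Pods().Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
		UpdateFunc: func(oldObj, newObj interface{}) {
			pod := newObj.(*corev1.Pod)
			fmt.Println("resynced:", pod.Namespace+"/"+pod.Name)
		},
	})

	stop := make(chan struct{})
	factory.Start(stop)
	factory.WaitForCacheSync(stop)
	time.Sleep(5 * time.Second) // long enough to observe a few resync rounds
	close(stop)
}
```

Resync is a safety net against missed watch events; controllers that do not need it pass a period of zero and rely on watch deltas alone.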
I0920 04:28:10.298621  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (1.380075ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37698]
I0920 04:28:10.319228  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.047698ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37698]
I0920 04:28:10.319532  108327 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery
I0920 04:28:10.338894  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:basic-user: (1.63227ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37698]
I0920 04:28:10.339047  108327 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:28:10.339080  108327 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:28:10.339132  108327 httplog.go:90] GET /healthz: (1.180472ms) 0 [Go-http-client/1.1 127.0.0.1:37696]
I0920 04:28:10.348658  108327 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:28:10.348803  108327 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:28:10.348924  108327 httplog.go:90] GET /healthz: (1.443117ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37696]
I0920 04:28:10.359135  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.985808ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37696]
I0920 04:28:10.359574  108327 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user
I0920 04:28:10.364222  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:10.364451  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:10.364463  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:10.365259  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:10.365502  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:10.366072  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:10.379025  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:public-info-viewer: (1.610883ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37696]
I0920 04:28:10.399730  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.528979ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37696]
I0920 04:28:10.400021  108327 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:public-info-viewer
I0920 04:28:10.418962  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node-proxier: (1.687398ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37696]
I0920 04:28:10.439243  108327 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:28:10.439280  108327 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:28:10.439323  108327 httplog.go:90] GET /healthz: (1.467953ms) 0 [Go-http-client/1.1 127.0.0.1:37698]
I0920 04:28:10.439536  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.212363ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37696]
I0920 04:28:10.439756  108327 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier
I0920 04:28:10.448611  108327 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:28:10.448644  108327 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:28:10.448689  108327 httplog.go:90] GET /healthz: (1.329148ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37696]
I0920 04:28:10.458752  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-controller-manager: (1.543218ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37696]
I0920 04:28:10.479554  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.28434ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37696]
I0920 04:28:10.479856  108327 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager
I0920 04:28:10.498708  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-dns: (1.473472ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37696]
I0920 04:28:10.499754  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:10.519283  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.028484ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37696]
I0920 04:28:10.519661  108327 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns
I0920 04:28:10.538764  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-scheduler: (1.477178ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37696]
I0920 04:28:10.538817  108327 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:28:10.539301  108327 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:28:10.539366  108327 httplog.go:90] GET /healthz: (1.452307ms) 0 [Go-http-client/1.1 127.0.0.1:37698]
I0920 04:28:10.548138  108327 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:28:10.548168  108327 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:28:10.548218  108327 httplog.go:90] GET /healthz: (931.808µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37698]
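The paired GET /healthz requests recurring roughly every 100ms are the test harness polling the apiserver until the rbac/bootstrap-roles post-start hook completes and the endpoint finally returns 200. A stdlib sketch of such a wait loop; the URL, timeout, and cadence are placeholders, not values taken from the harness:

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls until /healthz answers 200 OK or the deadline
// passes, matching the retry pattern visible in the log above.
func waitForHealthz(url string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := http.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(100 * time.Millisecond) // roughly the cadence seen above
	}
	return fmt.Errorf("server not healthy after %v", timeout)
}

func main() {
	if err := waitForHealthz("http://127.0.0.1:8080/healthz", 30*time.Second); err != nil {
		panic(err)
	}
	fmt.Println("healthy")
}
```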
I0920 04:28:10.558946  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.764758ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37698]
I0920 04:28:10.559375  108327 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler
I0920 04:28:10.569191  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:10.578680  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:volume-scheduler: (1.433936ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37698]
I0920 04:28:10.600338  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.999887ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37698]
I0920 04:28:10.600656  108327 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:volume-scheduler
I0920 04:28:10.618588  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node: (1.372959ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37698]
I0920 04:28:10.638937  108327 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:28:10.638977  108327 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:28:10.639018  108327 httplog.go:90] GET /healthz: (1.071857ms) 0 [Go-http-client/1.1 127.0.0.1:37696]
I0920 04:28:10.639325  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.031416ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37698]
I0920 04:28:10.639538  108327 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node
I0920 04:28:10.648274  108327 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:28:10.648304  108327 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:28:10.648339  108327 httplog.go:90] GET /healthz: (1.028333ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37698]
I0920 04:28:10.658502  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:attachdetach-controller: (1.407229ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37698]
I0920 04:28:10.679366  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.131012ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37698]
I0920 04:28:10.679771  108327 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0920 04:28:10.698799  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:clusterrole-aggregation-controller: (1.496307ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37698]
I0920 04:28:10.719673  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.450528ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37698]
I0920 04:28:10.719933  108327 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0920 04:28:10.738657  108327 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:28:10.738692  108327 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:28:10.738734  108327 httplog.go:90] GET /healthz: (849.016µs) 0 [Go-http-client/1.1 127.0.0.1:37696]
I0920 04:28:10.738951  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:cronjob-controller: (1.715385ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37698]
I0920 04:28:10.748553  108327 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:28:10.748731  108327 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:28:10.748860  108327 httplog.go:90] GET /healthz: (1.424162ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37698]
I0920 04:28:10.759375  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.156333ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37698]
I0920 04:28:10.759665  108327 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0920 04:28:10.778683  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:daemon-set-controller: (1.420401ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37698]
I0920 04:28:10.799426  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.158946ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37698]
I0920 04:28:10.799759  108327 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0920 04:28:10.818822  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:deployment-controller: (1.586822ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37698]
I0920 04:28:10.838743  108327 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:28:10.838773  108327 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:28:10.838804  108327 httplog.go:90] GET /healthz: (879.689µs) 0 [Go-http-client/1.1 127.0.0.1:37696]
I0920 04:28:10.839580  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.364373ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37698]
I0920 04:28:10.839811  108327 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I0920 04:28:10.848686  108327 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:28:10.848724  108327 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:28:10.849021  108327 httplog.go:90] GET /healthz: (1.547717ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37698]
I0920 04:28:10.858583  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:disruption-controller: (1.548239ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37698]
I0920 04:28:10.865563  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:10.865576  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:10.865769  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:10.867508  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:10.867511  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:10.867558  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:10.868100  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:10.879172  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.953948ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37698]
I0920 04:28:10.879564  108327 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I0920 04:28:10.898588  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:endpoint-controller: (1.398456ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37698]
I0920 04:28:10.919529  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.253334ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37698]
I0920 04:28:10.919824  108327 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0920 04:28:10.938490  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:expand-controller: (1.318852ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37698]
I0920 04:28:10.938861  108327 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:28:10.938892  108327 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:28:10.939030  108327 httplog.go:90] GET /healthz: (1.124736ms) 0 [Go-http-client/1.1 127.0.0.1:37696]
I0920 04:28:10.948384  108327 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:28:10.948449  108327 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:28:10.948550  108327 httplog.go:90] GET /healthz: (1.208677ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37696]
I0920 04:28:10.959276  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.070282ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37696]
I0920 04:28:10.959753  108327 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
I0920 04:28:10.978441  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:generic-garbage-collector: (1.251678ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37696]
I0920 04:28:10.999193  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.053238ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37696]
I0920 04:28:10.999499  108327 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0920 04:28:11.018544  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:horizontal-pod-autoscaler: (1.324062ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37696]
I0920 04:28:11.038715  108327 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:28:11.038878  108327 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:28:11.038979  108327 httplog.go:90] GET /healthz: (1.145501ms) 0 [Go-http-client/1.1 127.0.0.1:37698]
I0920 04:28:11.039474  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.146044ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37696]
I0920 04:28:11.039760  108327 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0920 04:28:11.048514  108327 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:28:11.048548  108327 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:28:11.048623  108327 httplog.go:90] GET /healthz: (1.259514ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37696]
I0920 04:28:11.058667  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:job-controller: (1.53784ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37696]
I0920 04:28:11.079955  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.63413ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37696]
I0920 04:28:11.080227  108327 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I0920 04:28:11.098649  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:namespace-controller: (1.439541ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37696]
I0920 04:28:11.119489  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.240117ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37696]
I0920 04:28:11.119737  108327 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I0920 04:28:11.138824  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:node-controller: (1.582728ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37696]
I0920 04:28:11.139285  108327 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:28:11.139310  108327 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:28:11.139349  108327 httplog.go:90] GET /healthz: (1.093817ms) 0 [Go-http-client/1.1 127.0.0.1:37698]
I0920 04:28:11.148558  108327 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:28:11.148598  108327 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:28:11.148648  108327 httplog.go:90] GET /healthz: (1.223985ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37698]
I0920 04:28:11.159636  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.348178ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37698]
I0920 04:28:11.159979  108327 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I0920 04:28:11.178701  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:persistent-volume-binder: (1.482637ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37698]
I0920 04:28:11.199593  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.305943ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37698]
I0920 04:28:11.199860  108327 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0920 04:28:11.218938  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pod-garbage-collector: (1.667119ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37698]
I0920 04:28:11.238903  108327 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:28:11.238933  108327 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:28:11.238975  108327 httplog.go:90] GET /healthz: (1.104275ms) 0 [Go-http-client/1.1 127.0.0.1:37696]
I0920 04:28:11.239609  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.390527ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37698]
I0920 04:28:11.239859  108327 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0920 04:28:11.248509  108327 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:28:11.248542  108327 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:28:11.248575  108327 httplog.go:90] GET /healthz: (1.210809ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37698]
I0920 04:28:11.258522  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replicaset-controller: (1.324647ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37698]
I0920 04:28:11.279260  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.049775ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37698]
I0920 04:28:11.279597  108327 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0920 04:28:11.292714  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:11.292742  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:11.292742  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:11.292721  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:11.292720  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:11.292718  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:11.298800  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replication-controller: (1.506111ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37698]
I0920 04:28:11.319283  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.071142ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37698]
I0920 04:28:11.319581  108327 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I0920 04:28:11.338736  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:resourcequota-controller: (1.501757ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37698]
I0920 04:28:11.338834  108327 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:28:11.338853  108327 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:28:11.338900  108327 httplog.go:90] GET /healthz: (956.188µs) 0 [Go-http-client/1.1 127.0.0.1:37696]
I0920 04:28:11.348645  108327 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:28:11.348687  108327 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:28:11.348739  108327 httplog.go:90] GET /healthz: (1.247975ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37696]
I0920 04:28:11.359462  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.161802ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37696]
I0920 04:28:11.359731  108327 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0920 04:28:11.364482  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:11.364611  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:11.364621  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:11.365554  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:11.365685  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:11.366255  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:11.378668  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:route-controller: (1.453502ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37696]
I0920 04:28:11.399333  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.043315ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37696]
I0920 04:28:11.399788  108327 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I0920 04:28:11.418896  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-account-controller: (1.6173ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37696]
I0920 04:28:11.438878  108327 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:28:11.439059  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.843703ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37696]
I0920 04:28:11.439105  108327 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:28:11.439274  108327 httplog.go:90] GET /healthz: (1.398677ms) 0 [Go-http-client/1.1 127.0.0.1:37698]
I0920 04:28:11.439431  108327 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I0920 04:28:11.448514  108327 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:28:11.448542  108327 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:28:11.448585  108327 httplog.go:90] GET /healthz: (1.152092ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37698]
I0920 04:28:11.458667  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-controller: (1.498597ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37698]
I0920 04:28:11.479340  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.090315ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37698]
I0920 04:28:11.479632  108327 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I0920 04:28:11.498527  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:statefulset-controller: (1.348177ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37698]
I0920 04:28:11.499909  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:11.519381  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.19681ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37698]
I0920 04:28:11.519859  108327 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0920 04:28:11.538779  108327 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:28:11.538813  108327 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:28:11.538853  108327 httplog.go:90] GET /healthz: (958.755µs) 0 [Go-http-client/1.1 127.0.0.1:37696]
I0920 04:28:11.538869  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:ttl-controller: (1.610135ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37698]
I0920 04:28:11.548843  108327 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:28:11.548905  108327 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:28:11.548962  108327 httplog.go:90] GET /healthz: (1.530152ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37696]
I0920 04:28:11.559300  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.088987ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37696]
I0920 04:28:11.559571  108327 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I0920 04:28:11.569526  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:11.578727  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:certificate-controller: (1.410318ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37696]
I0920 04:28:11.599584  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.219497ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37696]
I0920 04:28:11.600133  108327 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I0920 04:28:11.618705  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pvc-protection-controller: (1.535736ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37696]
I0920 04:28:11.638893  108327 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:28:11.639145  108327 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:28:11.639357  108327 httplog.go:90] GET /healthz: (1.496565ms) 0 [Go-http-client/1.1 127.0.0.1:37698]
I0920 04:28:11.640163  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.82439ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37696]
I0920 04:28:11.640494  108327 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0920 04:28:11.648350  108327 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:28:11.648382  108327 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:28:11.648487  108327 httplog.go:90] GET /healthz: (1.13405ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37696]
I0920 04:28:11.658799  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pv-protection-controller: (1.579711ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37696]
I0920 04:28:11.679267  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.084555ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37696]
I0920 04:28:11.679629  108327 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0920 04:28:11.698712  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/extension-apiserver-authentication-reader: (1.464598ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37696]
I0920 04:28:11.700659  108327 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.47225ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37696]
I0920 04:28:11.719882  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.627658ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37696]
I0920 04:28:11.720189  108327 storage_rbac.go:278] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
I0920 04:28:11.738772  108327 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:28:11.738949  108327 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:28:11.739189  108327 httplog.go:90] GET /healthz: (1.30494ms) 0 [Go-http-client/1.1 127.0.0.1:37698]
I0920 04:28:11.739077  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:bootstrap-signer: (1.815155ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37696]
I0920 04:28:11.741101  108327 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.230408ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37696]
I0920 04:28:11.748491  108327 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:28:11.748544  108327 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:28:11.748577  108327 httplog.go:90] GET /healthz: (1.209466ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37696]
I0920 04:28:11.759741  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.386517ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37696]
I0920 04:28:11.760253  108327 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0920 04:28:11.778658  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:cloud-provider: (1.399261ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37696]
I0920 04:28:11.780674  108327 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.504819ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37696]
I0920 04:28:11.799436  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.254117ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37696]
I0920 04:28:11.799757  108327 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0920 04:28:11.818614  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:token-cleaner: (1.416859ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37696]
I0920 04:28:11.820556  108327 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.429884ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37696]
I0920 04:28:11.839270  108327 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:28:11.839305  108327 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:28:11.839345  108327 httplog.go:90] GET /healthz: (1.415986ms) 0 [Go-http-client/1.1 127.0.0.1:37698]
I0920 04:28:11.839706  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.451381ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37696]
I0920 04:28:11.839966  108327 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0920 04:28:11.848558  108327 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:28:11.848602  108327 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:28:11.848644  108327 httplog.go:90] GET /healthz: (1.22335ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37696]
I0920 04:28:11.858948  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-controller-manager: (1.73755ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37696]
I0920 04:28:11.860942  108327 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.46762ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37696]
I0920 04:28:11.865727  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:11.865756  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:11.865920  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:11.867771  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:11.867798  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:11.867880  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:11.868270  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:11.879549  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.329782ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37696]
I0920 04:28:11.879982  108327 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0920 04:28:11.898763  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-scheduler: (1.523226ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37696]
I0920 04:28:11.900935  108327 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.600121ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37696]
I0920 04:28:11.919526  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.298104ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37696]
I0920 04:28:11.919806  108327 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0920 04:28:11.938659  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles/system:controller:bootstrap-signer: (1.458341ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37696]
I0920 04:28:11.938835  108327 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:28:11.938869  108327 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:28:11.938913  108327 httplog.go:90] GET /healthz: (1.037061ms) 0 [Go-http-client/1.1 127.0.0.1:37698]
I0920 04:28:11.940536  108327 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.432214ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37696]
I0920 04:28:11.948572  108327 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:28:11.948612  108327 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:28:11.948730  108327 httplog.go:90] GET /healthz: (1.343561ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37696]
I0920 04:28:11.959858  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles: (2.603159ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37696]
I0920 04:28:11.960244  108327 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0920 04:28:11.978678  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::extension-apiserver-authentication-reader: (1.476784ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37696]
I0920 04:28:11.980932  108327 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.675209ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37696]
I0920 04:28:11.999670  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.470037ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37696]
I0920 04:28:11.999952  108327 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::extension-apiserver-authentication-reader in kube-system
I0920 04:28:12.018796  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-controller-manager: (1.514667ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37696]
I0920 04:28:12.020948  108327 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.551598ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37696]
I0920 04:28:12.038858  108327 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:28:12.039030  108327 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:28:12.039196  108327 httplog.go:90] GET /healthz: (1.280189ms) 0 [Go-http-client/1.1 127.0.0.1:37698]
I0920 04:28:12.039795  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.641656ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37696]
I0920 04:28:12.040067  108327 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0920 04:28:12.048383  108327 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:28:12.048442  108327 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:28:12.048487  108327 httplog.go:90] GET /healthz: (1.149362ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37696]
I0920 04:28:12.058719  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-scheduler: (1.484198ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37696]
I0920 04:28:12.060782  108327 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.288413ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37696]
I0920 04:28:12.079243  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.994609ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37696]
I0920 04:28:12.079532  108327 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0920 04:28:12.082907  108327 httplog.go:90] GET /api/v1/namespaces/default: (1.65797ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:28:12.084759  108327 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.468046ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:28:12.086699  108327 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.356916ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:28:12.098869  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:bootstrap-signer: (1.66248ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37696]
I0920 04:28:12.100947  108327 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.353668ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37696]
I0920 04:28:12.119375  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.121905ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37696]
I0920 04:28:12.119932  108327 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0920 04:28:12.138824  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:cloud-provider: (1.625675ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37696]
I0920 04:28:12.139071  108327 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:28:12.139274  108327 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:28:12.139526  108327 httplog.go:90] GET /healthz: (1.550914ms) 0 [Go-http-client/1.1 127.0.0.1:37698]
I0920 04:28:12.140512  108327 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.155684ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37696]
I0920 04:28:12.148441  108327 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:28:12.148465  108327 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:28:12.148501  108327 httplog.go:90] GET /healthz: (1.199176ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37696]
I0920 04:28:12.159669  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.455247ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37696]
I0920 04:28:12.159999  108327 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0920 04:28:12.178648  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:token-cleaner: (1.425871ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37696]
I0920 04:28:12.180739  108327 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.468341ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37696]
I0920 04:28:12.199245  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.067569ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37696]
I0920 04:28:12.199538  108327 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0920 04:28:12.218611  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings/system:controller:bootstrap-signer: (1.47126ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37696]
I0920 04:28:12.220523  108327 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.436984ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37696]
I0920 04:28:12.239046  108327 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:28:12.239096  108327 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:28:12.239152  108327 httplog.go:90] GET /healthz: (1.241155ms) 0 [Go-http-client/1.1 127.0.0.1:37698]
I0920 04:28:12.239845  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings: (2.586115ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37696]
I0920 04:28:12.240336  108327 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0920 04:28:12.248777  108327 httplog.go:90] GET /healthz: (1.31912ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37696]
I0920 04:28:12.250458  108327 httplog.go:90] GET /api/v1/namespaces/default: (1.276464ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37696]
I0920 04:28:12.252670  108327 httplog.go:90] POST /api/v1/namespaces: (1.718644ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37696]
I0920 04:28:12.254008  108327 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.012351ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37696]
I0920 04:28:12.257791  108327 httplog.go:90] POST /api/v1/namespaces/default/services: (3.414122ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37696]
I0920 04:28:12.259319  108327 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (949.085µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37696]
I0920 04:28:12.261785  108327 httplog.go:90] POST /api/v1/namespaces/default/endpoints: (1.624888ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37696]
I0920 04:28:12.292930  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:12.292937  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:12.292951  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:12.292958  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:12.293003  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:12.293152  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:12.339251  108327 httplog.go:90] GET /healthz: (1.236452ms) 200 [Go-http-client/1.1 127.0.0.1:37696]
W0920 04:28:12.340617  108327 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0920 04:28:12.340663  108327 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0920 04:28:12.340700  108327 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0920 04:28:12.340709  108327 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0920 04:28:12.340824  108327 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0920 04:28:12.340852  108327 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0920 04:28:12.340863  108327 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0920 04:28:12.340871  108327 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0920 04:28:12.340967  108327 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0920 04:28:12.340991  108327 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0920 04:28:12.341000  108327 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0920 04:28:12.341115  108327 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0920 04:28:12.341151  108327 factory.go:294] Creating scheduler from algorithm provider 'DefaultProvider'
I0920 04:28:12.341163  108327 factory.go:382] Creating scheduler with fit predicates 'map[CheckNodeUnschedulable:{} CheckVolumeBinding:{} GeneralPredicates:{} MatchInterPodAffinity:{} MaxAzureDiskVolumeCount:{} MaxCSIVolumeCountPred:{} MaxEBSVolumeCount:{} MaxGCEPDVolumeCount:{} NoDiskConflict:{} NoVolumeZoneConflict:{} PodToleratesNodeTaints:{}]' and priority functions 'map[BalancedResourceAllocation:{} ImageLocalityPriority:{} InterPodAffinityPriority:{} LeastRequestedPriority:{} NodeAffinityPriority:{} NodePreferAvoidPodsPriority:{} SelectorSpreadPriority:{} TaintTolerationPriority:{}]'
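The factory line above enumerates the DefaultProvider fit predicates, including PodToleratesNodeTaints, the predicate this taint-based-eviction test leans on. A simplified, self-contained sketch of that rule using only the core/v1 types (an illustration of the check, not the scheduler's actual code path):

```go
package sketch

import v1 "k8s.io/api/core/v1"

// podToleratesTaints is a simplified PodToleratesNodeTaints: every
// NoSchedule/NoExecute taint on the node must be matched by some
// toleration on the pod; PreferNoSchedule is soft and never blocks.
func podToleratesTaints(pod *v1.Pod, node *v1.Node) bool {
	for _, taint := range node.Spec.Taints {
		if taint.Effect != v1.TaintEffectNoSchedule && taint.Effect != v1.TaintEffectNoExecute {
			continue
		}
		tolerated := false
		for i := range pod.Spec.Tolerations {
			if pod.Spec.Tolerations[i].ToleratesTaint(&taint) {
				tolerated = true
				break
			}
		}
		if !tolerated {
			return false
		}
	}
	return true
}
```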
I0920 04:28:12.341450  108327 shared_informer.go:197] Waiting for caches to sync for scheduler
I0920 04:28:12.341714  108327 reflector.go:118] Starting reflector *v1.Pod (12h0m0s) from k8s.io/kubernetes/test/integration/scheduler/util.go:232
I0920 04:28:12.341737  108327 reflector.go:153] Listing and watching *v1.Pod from k8s.io/kubernetes/test/integration/scheduler/util.go:232
I0920 04:28:12.342937  108327 httplog.go:90] GET /api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0: (807.214µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37696]
I0920 04:28:12.344065  108327 get.go:251] Starting watch for /api/v1/pods, rv=59612 labels= fields=status.phase!=Failed,status.phase!=Succeeded timeout=7m6s
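The two lines above are the reflector protocol in miniature: a paginated LIST at `resourceVersion=0` with the scheduler's field selector, then a WATCH started from the resourceVersion the LIST returned (rv=59612 here). A sketch of the same sequence with a plain clientset, assuming recent client-go signatures:

```go
package sketch

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// listThenWatch mirrors the reflector's LIST-then-WATCH: list pods that are
// neither Failed nor Succeeded, then watch from the returned resourceVersion.
func listThenWatch(ctx context.Context, cs kubernetes.Interface) error {
	sel := "status.phase!=Failed,status.phase!=Succeeded"
	pods, err := cs.CoreV1().Pods("").List(ctx, metav1.ListOptions{
		FieldSelector:   sel,
		ResourceVersion: "0", // allow a cached LIST, as the log's request does
		Limit:           500,
	})
	if err != nil {
		return err
	}
	w, err := cs.CoreV1().Pods("").Watch(ctx, metav1.ListOptions{
		FieldSelector:   sel,
		ResourceVersion: pods.ResourceVersion, // rv=59612 in the log
	})
	if err != nil {
		return err
	}
	defer w.Stop()
	for ev := range w.ResultChan() {
		fmt.Println(ev.Type) // ADDED/MODIFIED/DELETED events stream here
	}
	return nil
}
```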
I0920 04:28:12.364652  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:12.364755  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:12.364782  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:12.365714  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:12.365954  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:12.366468  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:12.441626  108327 shared_informer.go:227] caches populated
I0920 04:28:12.441661  108327 shared_informer.go:204] Caches are synced for scheduler 
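The sync lines above come from the shared-informer machinery: every informer the scheduler registered must finish its initial LIST before "Caches are synced" is printed, and the 1s resync period visible on the reflector lines is what produces the steady "forcing resync" chatter. A sketch of that start-and-wait sequence, assuming an already-built clientset:

```go
package sketch

import (
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
)

// startInformers shows the sequence behind "Waiting for caches to sync" /
// "Caches are synced": build a factory, register informers, start, wait.
func startInformers(cs kubernetes.Interface, stopCh <-chan struct{}) {
	// A 1-second resync matches the "(1s)" reflectors in the log and
	// explains the once-per-second "forcing resync" lines.
	factory := informers.NewSharedInformerFactory(cs, 1*time.Second)

	// Touching a lister before Start is what registers the informer and
	// creates one list/watch reflector per resource type.
	_ = factory.Core().V1().Pods().Lister()

	factory.Start(stopCh)            // one reflector goroutine per type
	factory.WaitForCacheSync(stopCh) // blocks until initial LISTs are cached
}
```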
I0920 04:28:12.442077  108327 reflector.go:118] Starting reflector *v1.PersistentVolume (1s) from k8s.io/client-go/informers/factory.go:134
I0920 04:28:12.442232  108327 reflector.go:153] Listing and watching *v1.PersistentVolume from k8s.io/client-go/informers/factory.go:134
I0920 04:28:12.442271  108327 reflector.go:118] Starting reflector *v1.StorageClass (1s) from k8s.io/client-go/informers/factory.go:134
I0920 04:28:12.442295  108327 reflector.go:153] Listing and watching *v1.StorageClass from k8s.io/client-go/informers/factory.go:134
I0920 04:28:12.442311  108327 reflector.go:118] Starting reflector *v1.Service (1s) from k8s.io/client-go/informers/factory.go:134
I0920 04:28:12.442273  108327 reflector.go:118] Starting reflector *v1.ReplicationController (1s) from k8s.io/client-go/informers/factory.go:134
I0920 04:28:12.442335  108327 reflector.go:153] Listing and watching *v1.Service from k8s.io/client-go/informers/factory.go:134
I0920 04:28:12.442340  108327 reflector.go:153] Listing and watching *v1.ReplicationController from k8s.io/client-go/informers/factory.go:134
I0920 04:28:12.442076  108327 reflector.go:118] Starting reflector *v1.StatefulSet (1s) from k8s.io/client-go/informers/factory.go:134
I0920 04:28:12.442425  108327 reflector.go:153] Listing and watching *v1.StatefulSet from k8s.io/client-go/informers/factory.go:134
I0920 04:28:12.442177  108327 reflector.go:118] Starting reflector *v1.ReplicaSet (1s) from k8s.io/client-go/informers/factory.go:134
I0920 04:28:12.442510  108327 reflector.go:153] Listing and watching *v1.ReplicaSet from k8s.io/client-go/informers/factory.go:134
I0920 04:28:12.442183  108327 reflector.go:118] Starting reflector *v1.PersistentVolumeClaim (1s) from k8s.io/client-go/informers/factory.go:134
I0920 04:28:12.442722  108327 reflector.go:153] Listing and watching *v1.PersistentVolumeClaim from k8s.io/client-go/informers/factory.go:134
I0920 04:28:12.443359  108327 httplog.go:90] GET /apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: (598.996µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37698]
I0920 04:28:12.443380  108327 httplog.go:90] GET /apis/apps/v1/statefulsets?limit=500&resourceVersion=0: (345.21µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37704]
I0920 04:28:12.443446  108327 httplog.go:90] GET /apis/apps/v1/replicasets?limit=500&resourceVersion=0: (389.124µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37716]
I0920 04:28:12.443499  108327 httplog.go:90] GET /api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: (482.077µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37720]
I0920 04:28:12.443560  108327 httplog.go:90] GET /api/v1/services?limit=500&resourceVersion=0: (509.078µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37708]
I0920 04:28:12.443817  108327 httplog.go:90] GET /api/v1/replicationcontrollers?limit=500&resourceVersion=0: (313.576µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37706]
I0920 04:28:12.443964  108327 httplog.go:90] GET /api/v1/persistentvolumes?limit=500&resourceVersion=0: (358.172µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37718]
I0920 04:28:12.444072  108327 get.go:251] Starting watch for /apis/apps/v1/replicasets, rv=59612 labels= fields= timeout=5m29s
I0920 04:28:12.444124  108327 get.go:251] Starting watch for /apis/storage.k8s.io/v1/storageclasses, rv=59612 labels= fields= timeout=9m46s
I0920 04:28:12.444470  108327 get.go:251] Starting watch for /api/v1/services, rv=59726 labels= fields= timeout=8m4s
I0920 04:28:12.444586  108327 get.go:251] Starting watch for /api/v1/persistentvolumeclaims, rv=59612 labels= fields= timeout=7m41s
I0920 04:28:12.444660  108327 get.go:251] Starting watch for /apis/apps/v1/statefulsets, rv=59612 labels= fields= timeout=7m10s
I0920 04:28:12.444763  108327 get.go:251] Starting watch for /api/v1/persistentvolumes, rv=59612 labels= fields= timeout=5m41s
I0920 04:28:12.444859  108327 reflector.go:118] Starting reflector *v1beta1.PodDisruptionBudget (1s) from k8s.io/client-go/informers/factory.go:134
I0920 04:28:12.444884  108327 reflector.go:153] Listing and watching *v1beta1.PodDisruptionBudget from k8s.io/client-go/informers/factory.go:134
I0920 04:28:12.445023  108327 get.go:251] Starting watch for /api/v1/replicationcontrollers, rv=59612 labels= fields= timeout=9m47s
I0920 04:28:12.445038  108327 reflector.go:118] Starting reflector *v1.Node (1s) from k8s.io/client-go/informers/factory.go:134
I0920 04:28:12.445055  108327 reflector.go:153] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:134
I0920 04:28:12.445353  108327 httplog.go:90] GET /apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: (268.816µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37724]
I0920 04:28:12.445548  108327 reflector.go:118] Starting reflector *v1beta1.CSINode (1s) from k8s.io/client-go/informers/factory.go:134
I0920 04:28:12.445586  108327 reflector.go:153] Listing and watching *v1beta1.CSINode from k8s.io/client-go/informers/factory.go:134
I0920 04:28:12.445861  108327 httplog.go:90] GET /api/v1/nodes?limit=500&resourceVersion=0: (401.811µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37726]
I0920 04:28:12.445967  108327 get.go:251] Starting watch for /apis/policy/v1beta1/poddisruptionbudgets, rv=59612 labels= fields= timeout=6m41s
I0920 04:28:12.446375  108327 httplog.go:90] GET /apis/storage.k8s.io/v1beta1/csinodes?limit=500&resourceVersion=0: (297.66µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37728]
I0920 04:28:12.446505  108327 get.go:251] Starting watch for /api/v1/nodes, rv=59612 labels= fields= timeout=6m28s
I0920 04:28:12.447027  108327 get.go:251] Starting watch for /apis/storage.k8s.io/v1beta1/csinodes, rv=59612 labels= fields= timeout=5m31s
I0920 04:28:12.464360  108327 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.763131ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52112]
I0920 04:28:12.466447  108327 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.485061ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52112]
I0920 04:28:12.468163  108327 httplog.go:90] GET /api/v1/namespaces/kube-node-lease: (1.274569ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52112]
I0920 04:28:12.500117  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:12.542006  108327 shared_informer.go:227] caches populated
I0920 04:28:12.542106  108327 shared_informer.go:227] caches populated
I0920 04:28:12.542118  108327 shared_informer.go:227] caches populated
I0920 04:28:12.542125  108327 shared_informer.go:227] caches populated
I0920 04:28:12.542132  108327 shared_informer.go:227] caches populated
I0920 04:28:12.542142  108327 shared_informer.go:227] caches populated
I0920 04:28:12.542148  108327 shared_informer.go:227] caches populated
I0920 04:28:12.542155  108327 shared_informer.go:227] caches populated
I0920 04:28:12.542160  108327 shared_informer.go:227] caches populated
I0920 04:28:12.542170  108327 shared_informer.go:227] caches populated
I0920 04:28:12.542181  108327 shared_informer.go:227] caches populated
I0920 04:28:12.542241  108327 node_lifecycle_controller.go:327] Sending events to api server.
I0920 04:28:12.542315  108327 node_lifecycle_controller.go:359] Controller is using taint based evictions.
W0920 04:28:12.542350  108327 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0920 04:28:12.542605  108327 taint_manager.go:162] Sending events to api server.
I0920 04:28:12.542705  108327 node_lifecycle_controller.go:453] Controller will reconcile labels.
I0920 04:28:12.542735  108327 node_lifecycle_controller.go:465] Controller will taint node by condition.
W0920 04:28:12.542752  108327 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0920 04:28:12.542778  108327 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0920 04:28:12.542943  108327 node_lifecycle_controller.go:488] Starting node controller
I0920 04:28:12.542978  108327 shared_informer.go:197] Waiting for caches to sync for taint
I0920 04:28:12.545278  108327 httplog.go:90] POST /api/v1/namespaces: (1.762275ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37730]
I0920 04:28:12.545691  108327 node_lifecycle_controller.go:327] Sending events to api server.
I0920 04:28:12.545760  108327 node_lifecycle_controller.go:359] Controller is using taint based evictions.
I0920 04:28:12.545922  108327 taint_manager.go:162] Sending events to api server.
I0920 04:28:12.546000  108327 node_lifecycle_controller.go:453] Controller will reconcile labels.
I0920 04:28:12.546110  108327 node_lifecycle_controller.go:465] Controller will taint node by condition.
I0920 04:28:12.546194  108327 node_lifecycle_controller.go:488] Starting node controller
I0920 04:28:12.546231  108327 shared_informer.go:197] Waiting for caches to sync for taint
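The two startup blocks above show node lifecycle controllers running with taint-based evictions: instead of deleting pods from unready nodes directly, the controller taints the node with `node.kubernetes.io/not-ready:NoExecute` and lets the NoExecuteTaintManager evict pods once their tolerationSeconds expire. A sketch of the toleration shape the "0 tolerationseconds" case exercises (field values inferred from the subtest name, not shown in this log):

```go
package sketch

import v1 "k8s.io/api/core/v1"

// zeroSecondNotReadyToleration builds the kind of toleration exercised by
// the "NodeNotReady and 0 tolerationseconds" case: the pod tolerates the
// NoExecute not-ready taint for 0s, so the taint manager evicts it as
// soon as the taint lands on its node.
func zeroSecondNotReadyToleration() v1.Toleration {
	var zero int64
	return v1.Toleration{
		Key:               "node.kubernetes.io/not-ready", // taint key seen later in the log
		Operator:          v1.TolerationOpExists,
		Effect:            v1.TaintEffectNoExecute,
		TolerationSeconds: &zero,
	}
}
```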
I0920 04:28:12.546460  108327 reflector.go:118] Starting reflector *v1.Namespace (1s) from k8s.io/client-go/informers/factory.go:134
I0920 04:28:12.546516  108327 reflector.go:153] Listing and watching *v1.Namespace from k8s.io/client-go/informers/factory.go:134
I0920 04:28:12.547595  108327 httplog.go:90] GET /api/v1/namespaces?limit=500&resourceVersion=0: (676.711µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37730]
I0920 04:28:12.548688  108327 get.go:251] Starting watch for /api/v1/namespaces, rv=59728 labels= fields= timeout=7m3s
I0920 04:28:12.569758  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:12.646374  108327 shared_informer.go:227] caches populated
I0920 04:28:12.646518  108327 shared_informer.go:227] caches populated
I0920 04:28:12.646529  108327 shared_informer.go:227] caches populated
I0920 04:28:12.646695  108327 reflector.go:118] Starting reflector *v1.Pod (1s) from k8s.io/client-go/informers/factory.go:134
I0920 04:28:12.646725  108327 reflector.go:153] Listing and watching *v1.Pod from k8s.io/client-go/informers/factory.go:134
I0920 04:28:12.646785  108327 reflector.go:118] Starting reflector *v1.DaemonSet (1s) from k8s.io/client-go/informers/factory.go:134
I0920 04:28:12.646843  108327 reflector.go:153] Listing and watching *v1.DaemonSet from k8s.io/client-go/informers/factory.go:134
I0920 04:28:12.646959  108327 reflector.go:118] Starting reflector *v1beta1.Lease (1s) from k8s.io/client-go/informers/factory.go:134
I0920 04:28:12.646983  108327 reflector.go:153] Listing and watching *v1beta1.Lease from k8s.io/client-go/informers/factory.go:134
I0920 04:28:12.647996  108327 httplog.go:90] GET /apis/coordination.k8s.io/v1beta1/leases?limit=500&resourceVersion=0: (514.52µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37736]
I0920 04:28:12.648012  108327 httplog.go:90] GET /apis/apps/v1/daemonsets?limit=500&resourceVersion=0: (466.066µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37734]
I0920 04:28:12.648001  108327 httplog.go:90] GET /api/v1/pods?limit=500&resourceVersion=0: (542.067µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37732]
I0920 04:28:12.648648  108327 get.go:251] Starting watch for /apis/coordination.k8s.io/v1beta1/leases, rv=59612 labels= fields= timeout=7m49s
I0920 04:28:12.648692  108327 get.go:251] Starting watch for /apis/apps/v1/daemonsets, rv=59612 labels= fields= timeout=5m43s
I0920 04:28:12.648966  108327 get.go:251] Starting watch for /api/v1/pods, rv=59612 labels= fields= timeout=7m24s
I0920 04:28:12.679539  108327 node_lifecycle_controller.go:718] Controller observed a Node deletion: node-1
I0920 04:28:12.679580  108327 controller_utils.go:168] Recording Removing Node node-1 from Controller event message for node node-1
I0920 04:28:12.679622  108327 node_lifecycle_controller.go:718] Controller observed a Node deletion: node-2
I0920 04:28:12.679629  108327 controller_utils.go:168] Recording Removing Node node-2 from Controller event message for node node-2
I0920 04:28:12.679639  108327 node_lifecycle_controller.go:718] Controller observed a Node deletion: node-0
I0920 04:28:12.679645  108327 controller_utils.go:168] Recording Removing Node node-0 from Controller event message for node node-0
I0920 04:28:12.679690  108327 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-1", UID:"c7acae9f-e174-4c6f-82e3-7dc76ef8f5dd", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RemovingNode' Node node-1 event: Removing Node node-1 from Controller
I0920 04:28:12.679735  108327 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-2", UID:"cdfc7aee-5c77-42ad-987e-abae06c9bdc2", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RemovingNode' Node node-2 event: Removing Node node-2 from Controller
I0920 04:28:12.679748  108327 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-0", UID:"b097d321-a892-43cc-a56a-2f6364f628f7", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RemovingNode' Node node-0 event: Removing Node node-0 from Controller
I0920 04:28:12.682438  108327 httplog.go:90] POST /api/v1/namespaces/default/events: (2.418424ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42428]
I0920 04:28:12.684847  108327 httplog.go:90] POST /api/v1/namespaces/default/events: (1.848412ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42428]
I0920 04:28:12.686973  108327 httplog.go:90] POST /api/v1/namespaces/default/events: (1.439569ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42428]
I0920 04:28:12.722300  108327 node_lifecycle_controller.go:718] Controller observed a Node deletion: node-0
I0920 04:28:12.722343  108327 controller_utils.go:168] Recording Removing Node node-0 from Controller event message for node node-0
I0920 04:28:12.722366  108327 node_lifecycle_controller.go:718] Controller observed a Node deletion: node-1
I0920 04:28:12.722373  108327 controller_utils.go:168] Recording Removing Node node-1 from Controller event message for node node-1
I0920 04:28:12.722384  108327 node_lifecycle_controller.go:718] Controller observed a Node deletion: node-2
I0920 04:28:12.722406  108327 controller_utils.go:168] Recording Removing Node node-2 from Controller event message for node node-2
I0920 04:28:12.722461  108327 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-2", UID:"cdfc7aee-5c77-42ad-987e-abae06c9bdc2", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RemovingNode' Node node-2 event: Removing Node node-2 from Controller
I0920 04:28:12.722500  108327 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-0", UID:"b097d321-a892-43cc-a56a-2f6364f628f7", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RemovingNode' Node node-0 event: Removing Node node-0 from Controller
I0920 04:28:12.722509  108327 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-1", UID:"c7acae9f-e174-4c6f-82e3-7dc76ef8f5dd", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RemovingNode' Node node-1 event: Removing Node node-1 from Controller
I0920 04:28:12.725656  108327 httplog.go:90] POST /api/v1/namespaces/default/events: (2.794664ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42428]
I0920 04:28:12.728315  108327 httplog.go:90] POST /api/v1/namespaces/default/events: (2.042236ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42428]
I0920 04:28:12.730804  108327 httplog.go:90] POST /api/v1/namespaces/default/events: (1.713395ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42428]
I0920 04:28:12.743171  108327 shared_informer.go:227] caches populated
I0920 04:28:12.743221  108327 shared_informer.go:204] Caches are synced for taint 
I0920 04:28:12.743274  108327 taint_manager.go:186] Starting NoExecuteTaintManager
I0920 04:28:12.746445  108327 shared_informer.go:227] caches populated
I0920 04:28:12.746471  108327 shared_informer.go:204] Caches are synced for taint 
I0920 04:28:12.746497  108327 taint_manager.go:186] Starting NoExecuteTaintManager
I0920 04:28:12.746774  108327 shared_informer.go:227] caches populated
I0920 04:28:12.746864  108327 shared_informer.go:227] caches populated
I0920 04:28:12.746931  108327 shared_informer.go:227] caches populated
I0920 04:28:12.746992  108327 shared_informer.go:227] caches populated
I0920 04:28:12.747057  108327 shared_informer.go:227] caches populated
I0920 04:28:12.747115  108327 shared_informer.go:227] caches populated
I0920 04:28:12.747192  108327 shared_informer.go:227] caches populated
I0920 04:28:12.747272  108327 shared_informer.go:227] caches populated
I0920 04:28:12.747344  108327 shared_informer.go:227] caches populated
I0920 04:28:12.747430  108327 shared_informer.go:227] caches populated
I0920 04:28:12.747497  108327 shared_informer.go:227] caches populated
I0920 04:28:12.750608  108327 httplog.go:90] POST /api/v1/nodes: (2.22985ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37738]
I0920 04:28:12.751046  108327 node_tree.go:93] Added node "node-0" in group "region1:\x00:zone1" to NodeTree
I0920 04:28:12.751178  108327 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-0"}
I0920 04:28:12.751198  108327 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-0"}
I0920 04:28:12.751211  108327 taint_manager.go:438] Updating known taints on node node-0: []
I0920 04:28:12.751221  108327 taint_manager.go:438] Updating known taints on node node-0: []
I0920 04:28:12.753431  108327 httplog.go:90] POST /api/v1/nodes: (2.078526ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37738]
I0920 04:28:12.753489  108327 node_tree.go:93] Added node "node-1" in group "region1:\x00:zone1" to NodeTree
I0920 04:28:12.753524  108327 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-1"}
I0920 04:28:12.753533  108327 taint_manager.go:438] Updating known taints on node node-1: []
I0920 04:28:12.753555  108327 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-1"}
I0920 04:28:12.753559  108327 taint_manager.go:438] Updating known taints on node node-1: []
I0920 04:28:12.755851  108327 httplog.go:90] POST /api/v1/nodes: (1.754893ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37738]
I0920 04:28:12.756207  108327 node_tree.go:93] Added node "node-2" in group "region1:\x00:zone1" to NodeTree
I0920 04:28:12.756265  108327 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-2"}
I0920 04:28:12.756440  108327 taint_manager.go:438] Updating known taints on node node-2: []
I0920 04:28:12.756278  108327 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-2"}
I0920 04:28:12.756681  108327 taint_manager.go:438] Updating known taints on node node-2: []
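The node_tree lines above key each node by a composite "region:\x00:zone" string derived from the node's topology labels; the NUL byte is simply the separator the tree uses internally. A sketch of a node labeled the way these test nodes appear to be (the label keys are the beta failure-domain names current when this log was produced; an assumption, since the POST bodies are not shown):

```go
package sketch

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// labeledNode sketches a Node whose topology labels would yield the
// "region1:\x00:zone1" NodeTree group key seen above.
func labeledNode(name string) *v1.Node {
	return &v1.Node{
		ObjectMeta: metav1.ObjectMeta{
			Name: name,
			Labels: map[string]string{
				"failure-domain.beta.kubernetes.io/region": "region1",
				"failure-domain.beta.kubernetes.io/zone":   "zone1",
			},
		},
	}
}
```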
I0920 04:28:12.758352  108327 httplog.go:90] POST /api/v1/namespaces/taint-based-evictionsab075fea-193d-4176-9122-ae81dac03dbf/pods: (2.014587ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37738]
I0920 04:28:12.758726  108327 scheduling_queue.go:830] About to try and schedule pod taint-based-evictionsab075fea-193d-4176-9122-ae81dac03dbf/testpod-2
I0920 04:28:12.758738  108327 taint_manager.go:398] Noticed pod update: types.NamespacedName{Namespace:"taint-based-evictionsab075fea-193d-4176-9122-ae81dac03dbf", Name:"testpod-2"}
I0920 04:28:12.758748  108327 scheduler.go:530] Attempting to schedule pod: taint-based-evictionsab075fea-193d-4176-9122-ae81dac03dbf/testpod-2
I0920 04:28:12.758776  108327 taint_manager.go:398] Noticed pod update: types.NamespacedName{Namespace:"taint-based-evictionsab075fea-193d-4176-9122-ae81dac03dbf", Name:"testpod-2"}
I0920 04:28:12.759043  108327 scheduler_binder.go:257] AssumePodVolumes for pod "taint-based-evictionsab075fea-193d-4176-9122-ae81dac03dbf/testpod-2", node "node-2"
I0920 04:28:12.759062  108327 scheduler_binder.go:267] AssumePodVolumes for pod "taint-based-evictionsab075fea-193d-4176-9122-ae81dac03dbf/testpod-2", node "node-2": all PVCs bound and nothing to do
I0920 04:28:12.759102  108327 factory.go:606] Attempting to bind testpod-2 to node-2
I0920 04:28:12.761002  108327 httplog.go:90] POST /api/v1/namespaces/taint-based-evictionsab075fea-193d-4176-9122-ae81dac03dbf/pods/testpod-2/binding: (1.699363ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37738]
I0920 04:28:12.761224  108327 taint_manager.go:398] Noticed pod update: types.NamespacedName{Namespace:"taint-based-evictionsab075fea-193d-4176-9122-ae81dac03dbf", Name:"testpod-2"}
I0920 04:28:12.761453  108327 scheduler.go:662] pod taint-based-evictionsab075fea-193d-4176-9122-ae81dac03dbf/testpod-2 is bound successfully on node "node-2", 3 nodes evaluated, 3 nodes were found feasible. Bound node resource: "Capacity: CPU<4>|Memory<16Gi>|Pods<110>|StorageEphemeral<0>; Allocatable: CPU<4>|Memory<16Gi>|Pods<110>|StorageEphemeral<0>.".
I0920 04:28:12.761480  108327 taint_manager.go:398] Noticed pod update: types.NamespacedName{Namespace:"taint-based-evictionsab075fea-193d-4176-9122-ae81dac03dbf", Name:"testpod-2"}
I0920 04:28:12.763110  108327 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/taint-based-evictionsab075fea-193d-4176-9122-ae81dac03dbf/events: (1.344779ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37738]
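Binding is the last scheduling step above: after picking node-2, the scheduler POSTs a Binding object to the pod's `binding` subresource rather than updating the pod itself. A sketch of that call, assuming the recent context-taking client-go signature:

```go
package sketch

import (
	"context"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// bindPod performs the same POST to the pod's binding subresource that the
// log shows for testpod-2, assigning the pod to a node.
func bindPod(ctx context.Context, cs kubernetes.Interface, ns, pod, node string) error {
	binding := &v1.Binding{
		ObjectMeta: metav1.ObjectMeta{Namespace: ns, Name: pod},
		Target:     v1.ObjectReference{Kind: "Node", Name: node},
	}
	return cs.CoreV1().Pods(ns).Bind(ctx, binding, metav1.CreateOptions{})
}
```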
I0920 04:28:12.861178  108327 httplog.go:90] GET /api/v1/namespaces/taint-based-evictionsab075fea-193d-4176-9122-ae81dac03dbf/pods/testpod-2: (1.952025ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37738]
I0920 04:28:12.863335  108327 httplog.go:90] GET /api/v1/namespaces/taint-based-evictionsab075fea-193d-4176-9122-ae81dac03dbf/pods/testpod-2: (1.431884ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37738]
I0920 04:28:12.865263  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.291797ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37738]
I0920 04:28:12.865897  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:12.865905  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:12.866058  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:12.867967  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:12.868076  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:12.868075  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:12.868209  108327 httplog.go:90] PUT /api/v1/nodes/node-2/status: (2.320712ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37738]
I0920 04:28:12.868445  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:12.869488  108327 httplog.go:90] GET /api/v1/nodes/node-2?resourceVersion=0: (438.508µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37738]
I0920 04:28:12.869815  108327 httplog.go:90] GET /api/v1/nodes/node-2?resourceVersion=0: (527.424µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:12.872974  108327 store.go:362] GuaranteedUpdate of /79daed50-3f61-49ed-b9a7-7aec41623006/minions/node-2 failed because of a conflict, going to retry
I0920 04:28:12.873018  108327 httplog.go:90] PATCH /api/v1/nodes/node-2: (2.215041ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37738]
I0920 04:28:12.873345  108327 controller_utils.go:204] Added [&Taint{Key:node.kubernetes.io/not-ready,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 04:28:12.868841766 +0000 UTC m=+361.167609301,}] Taint to Node node-2
I0920 04:28:12.873422  108327 controller_utils.go:216] Made sure that Node node-2 has no [] Taint
I0920 04:28:12.873792  108327 httplog.go:90] PATCH /api/v1/nodes/node-2: (3.138723ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:12.874050  108327 controller_utils.go:204] Added [&Taint{Key:node.kubernetes.io/not-ready,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 04:28:12.868840468 +0000 UTC m=+361.167608034,}] Taint to Node node-2
I0920 04:28:12.874129  108327 controller_utils.go:216] Made sure that Node node-2 has no [] Taint
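The GuaranteedUpdate conflict above is the two controller instances started earlier racing to add the same not-ready taint; the loser's PATCH is retried against the fresh object, which is why both writers end up logging "Added [...] Taint to Node node-2". A sketch of the standard conflict-retry idiom for the same update, assuming recent client-go (a real controller would also skip the append when the taint is already present):

```go
package sketch

import (
	"context"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/util/retry"
)

// addNotReadyTaint retries on ResourceVersion conflicts the way the
// controller does when two writers race to update one Node object.
func addNotReadyTaint(ctx context.Context, cs kubernetes.Interface, nodeName string) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		node, err := cs.CoreV1().Nodes().Get(ctx, nodeName, metav1.GetOptions{})
		if err != nil {
			return err
		}
		node.Spec.Taints = append(node.Spec.Taints, v1.Taint{
			Key:    "node.kubernetes.io/not-ready", // the taint added in the log
			Effect: v1.TaintEffectNoSchedule,
		})
		_, err = cs.CoreV1().Nodes().Update(ctx, node, metav1.UpdateOptions{})
		return err
	})
}
```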
I0920 04:28:12.971238  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.144525ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:13.070912  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.999493ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:13.171025  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.058768ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:13.271025  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.00403ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:13.293174  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:13.293180  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:13.293184  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:13.293231  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:13.293194  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:13.293328  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:13.364817  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:13.364871  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:13.364930  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:13.365888  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:13.366190  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:13.366630  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:13.370787  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.759113ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:13.443978  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:13.444296  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:13.444357  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:13.444472  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:13.446282  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:13.446883  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:13.471038  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.054263ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:13.500435  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:13.570001  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:13.570949  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.867844ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:13.648720  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:13.671127  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.156604ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:13.770891  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.889316ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:13.866159  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:13.866154  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:13.866226  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:13.868049  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:13.868310  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:13.868318  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:13.868624  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:13.870899  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.982104ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:13.970927  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.876745ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:14.071138  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.019212ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:14.170819  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.803972ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:14.271312  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.276571ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:14.293424  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:14.293428  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:14.293448  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:14.293455  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:14.293476  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:14.293468  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:14.365081  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:14.365083  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:14.365111  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:14.366120  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:14.366502  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:14.366783  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:14.371314  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.259005ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:14.444193  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:14.444535  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:14.444589  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:14.444619  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:14.446471  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:14.447051  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:14.470981  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.08752ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:14.500794  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:14.570193  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:14.571091  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.911992ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:14.649032  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:14.670958  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.005003ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:14.770987  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.959338ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:14.866541  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:14.866552  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:14.866634  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:14.868336  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:14.868522  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:14.868615  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:14.868892  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:14.870841  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.812523ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:14.970954  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.954015ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:15.070637  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.629935ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:15.170730  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.736536ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:15.271601  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.570843ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:15.293759  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:15.293759  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:15.293935  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:15.293956  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:15.293974  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:15.294007  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:15.365519  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:15.365646  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:15.365649  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:15.366294  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:15.366701  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:15.366954  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:15.370805  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.81601ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:15.444323  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:15.444728  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:15.444769  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:15.444786  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:15.446688  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:15.447198  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:15.471065  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.074854ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:15.500997  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:15.570365  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:15.570784  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.817061ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:15.649220  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:15.671076  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.980876ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:15.750766  108327 httplog.go:90] GET /api/v1/namespaces/default: (1.765703ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52112]
I0920 04:28:15.752853  108327 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.5821ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52112]
I0920 04:28:15.754831  108327 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.353976ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52112]
I0920 04:28:15.770981  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.985124ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:15.866786  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:15.866891  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:15.866908  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:15.868631  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:15.868659  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:15.868845  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:15.869101  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:15.870803  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.755938ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:15.970995  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.044833ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:16.071099  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.984191ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:16.171327  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.190331ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:16.271287  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.250991ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:16.294066  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:16.294109  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:16.294129  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:16.294087  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:16.294159  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:16.294114  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:16.365726  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:16.365726  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:16.365823  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:16.366460  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:16.366890  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:16.367143  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:16.370869  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.871476ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:16.444440  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:16.444923  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:16.444931  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:16.444950  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:16.446875  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:16.447366  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:16.470822  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.776534ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:16.501330  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:16.570571  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:16.570825  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.892271ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:16.649514  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:16.670740  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.82519ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:16.770753  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.774945ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:16.866977  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:16.866985  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:16.867046  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:16.868815  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:16.868915  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:16.868926  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:16.869251  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:16.870714  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.722759ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:16.971271  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.295809ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:17.070977  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.888828ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:17.163222  108327 httplog.go:90] GET /api/v1/namespaces/default: (1.844229ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42428]
I0920 04:28:17.165242  108327 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.371305ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42428]
I0920 04:28:17.167336  108327 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.421143ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42428]
I0920 04:28:17.170404  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.529749ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:17.271210  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.12669ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:17.294280  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:17.294305  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:17.294323  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:17.294278  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:17.294306  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:17.294305  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:17.366134  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:17.366152  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:17.366134  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:17.366673  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:17.367151  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:17.367359  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:17.370959  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.002857ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:17.444806  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:17.445170  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:17.445181  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:17.445297  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:17.447075  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:17.447543  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:17.470914  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.902259ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:17.501611  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:17.570751  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:17.571064  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.942801ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:17.649745  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:17.670946  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.88383ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:17.743517  108327 node_lifecycle_controller.go:706] Controller observed a new Node: "node-1"
I0920 04:28:17.743577  108327 controller_utils.go:168] Recording Registered Node node-1 in Controller event message for node node-1
I0920 04:28:17.743702  108327 node_lifecycle_controller.go:1244] Initializing eviction metric for zone: region1:\x00:zone1
I0920 04:28:17.743765  108327 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-1", UID:"3c725a3c-57bb-4cdf-8bda-f5207514a389", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node node-1 event: Registered Node node-1 in Controller
I0920 04:28:17.743886  108327 node_lifecycle_controller.go:706] Controller observed a new Node: "node-2"
I0920 04:28:17.743996  108327 controller_utils.go:168] Recording Registered Node node-2 in Controller event message for node node-2
I0920 04:28:17.744122  108327 node_lifecycle_controller.go:706] Controller observed a new Node: "node-0"
I0920 04:28:17.744202  108327 controller_utils.go:168] Recording Registered Node node-0 in Controller event message for node node-0
W0920 04:28:17.744337  108327 node_lifecycle_controller.go:940] Missing timestamp for Node node-1. Assuming now as a timestamp.
W0920 04:28:17.744510  108327 node_lifecycle_controller.go:940] Missing timestamp for Node node-2. Assuming now as a timestamp.
I0920 04:28:17.744623  108327 node_lifecycle_controller.go:770] Node node-2 is NotReady as of 2019-09-20 04:28:17.74460286 +0000 UTC m=+366.043370399. Adding it to the Taint queue.
W0920 04:28:17.744738  108327 node_lifecycle_controller.go:940] Missing timestamp for Node node-0. Assuming now as a timestamp.
I0920 04:28:17.744852  108327 node_lifecycle_controller.go:1144] Controller detected that zone region1:\x00:zone1 is now in state Normal.
I0920 04:28:17.744218  108327 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-2", UID:"f2b734e7-b7d5-494d-96bb-54f75b2fe754", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node node-2 event: Registered Node node-2 in Controller
I0920 04:28:17.745074  108327 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-0", UID:"0b869b0a-f7cb-4fde-ab21-bb12d76534d7", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node node-0 event: Registered Node node-0 in Controller
I0920 04:28:17.746766  108327 node_lifecycle_controller.go:706] Controller observed a new Node: "node-0"
I0920 04:28:17.746806  108327 controller_utils.go:168] Recording Registered Node node-0 in Controller event message for node node-0
I0920 04:28:17.746939  108327 node_lifecycle_controller.go:1244] Initializing eviction metric for zone: region1:\x00:zone1
I0920 04:28:17.746964  108327 node_lifecycle_controller.go:706] Controller observed a new Node: "node-1"
I0920 04:28:17.746972  108327 controller_utils.go:168] Recording Registered Node node-1 in Controller event message for node node-1
I0920 04:28:17.746985  108327 node_lifecycle_controller.go:706] Controller observed a new Node: "node-2"
I0920 04:28:17.746990  108327 controller_utils.go:168] Recording Registered Node node-2 in Controller event message for node node-2
W0920 04:28:17.747054  108327 node_lifecycle_controller.go:940] Missing timestamp for Node node-0. Assuming now as a timestamp.
W0920 04:28:17.747127  108327 node_lifecycle_controller.go:940] Missing timestamp for Node node-1. Assuming now as a timestamp.
W0920 04:28:17.747186  108327 node_lifecycle_controller.go:940] Missing timestamp for Node node-2. Assuming now as a timestamp.
I0920 04:28:17.747250  108327 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-1", UID:"3c725a3c-57bb-4cdf-8bda-f5207514a389", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node node-1 event: Registered Node node-1 in Controller
I0920 04:28:17.747382  108327 node_lifecycle_controller.go:770] Node node-2 is NotReady as of 2019-09-20 04:28:17.74720346 +0000 UTC m=+366.045970996. Adding it to the Taint queue.
I0920 04:28:17.747460  108327 node_lifecycle_controller.go:1144] Controller detected that zone region1:\x00:zone1 is now in state Normal.
I0920 04:28:17.747380  108327 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-0", UID:"0b869b0a-f7cb-4fde-ab21-bb12d76534d7", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node node-0 event: Registered Node node-0 in Controller
I0920 04:28:17.747627  108327 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-2", UID:"f2b734e7-b7d5-494d-96bb-54f75b2fe754", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node node-2 event: Registered Node node-2 in Controller
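Both monitor passes above classify node-2 as NotReady from its Ready condition and add it to the taint queue. A compile-and-run sketch of that classification; the names (Condition, needsNotReadyTaint) are assumptions for illustration, not the controller's real types:

package main

import "fmt"

type Condition struct{ Type, Status string }

// needsNotReadyTaint mirrors the decision logged above: a node whose Ready
// condition is anything but True gets node.kubernetes.io/not-ready:NoExecute.
func needsNotReadyTaint(ready Condition) bool {
	return ready.Status != "True"
}

func main() {
	ready := Condition{Type: "Ready", Status: "False"} // as reported for node-2
	if needsNotReadyTaint(ready) {
		fmt.Println("queue node-2 for taint node.kubernetes.io/not-ready:NoExecute")
	}
}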
I0920 04:28:17.748005  108327 httplog.go:90] POST /api/v1/namespaces/default/events: (2.564981ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:17.749852  108327 httplog.go:90] POST /api/v1/namespaces/default/events: (2.154313ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37738]
I0920 04:28:17.750340  108327 httplog.go:90] POST /api/v1/namespaces/default/events: (1.783002ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:17.751783  108327 httplog.go:90] GET /api/v1/nodes/node-2?resourceVersion=0: (694.187µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37742]
I0920 04:28:17.751864  108327 httplog.go:90] POST /api/v1/namespaces/default/events: (1.439184ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37738]
I0920 04:28:17.752628  108327 httplog.go:90] POST /api/v1/namespaces/default/events: (1.861387ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:17.753937  108327 httplog.go:90] POST /api/v1/namespaces/default/events: (1.489423ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37738]
I0920 04:28:17.754456  108327 httplog.go:90] GET /api/v1/nodes/node-2?resourceVersion=0: (483.177µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:17.755699  108327 httplog.go:90] PATCH /api/v1/nodes/node-2: (2.84599ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37742]
I0920 04:28:17.755931  108327 controller_utils.go:204] Added [&Taint{Key:node.kubernetes.io/not-ready,Value:,Effect:NoExecute,TimeAdded:2019-09-20 04:28:17.750585963 +0000 UTC m=+366.049353526,}] Taint to Node node-2
I0920 04:28:17.755984  108327 controller_utils.go:216] Made sure that Node node-2 has no [&Taint{Key:node.kubernetes.io/unreachable,Value:,Effect:NoExecute,TimeAdded:<nil>,}] Taint
I0920 04:28:17.756087  108327 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-2"}
I0920 04:28:17.756111  108327 taint_manager.go:438] Updating known taints on node node-2: [{node.kubernetes.io/not-ready  NoExecute 2019-09-20 04:28:17 +0000 UTC}]
I0920 04:28:17.756156  108327 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-2"}
I0920 04:28:17.756175  108327 timed_workers.go:110] Adding TimedWorkerQueue item taint-based-evictionsab075fea-193d-4176-9122-ae81dac03dbf/testpod-2 at 2019-09-20 04:28:17.75616594 +0000 UTC m=+366.054933499 to be fired at 2019-09-20 04:28:17.75616594 +0000 UTC m=+366.054933499
I0920 04:28:17.756171  108327 taint_manager.go:438] Updating known taints on node node-2: [{node.kubernetes.io/not-ready  NoExecute 2019-09-20 04:28:17 +0000 UTC}]
I0920 04:28:17.756206  108327 taint_manager.go:105] NoExecuteTaintManager is deleting Pod: taint-based-evictionsab075fea-193d-4176-9122-ae81dac03dbf/testpod-2
I0920 04:28:17.756226  108327 timed_workers.go:110] Adding TimedWorkerQueue item taint-based-evictionsab075fea-193d-4176-9122-ae81dac03dbf/testpod-2 at 2019-09-20 04:28:17.756217247 +0000 UTC m=+366.054984806 to be fired at 2019-09-20 04:28:17.756217247 +0000 UTC m=+366.054984806
I0920 04:28:17.756264  108327 taint_manager.go:105] NoExecuteTaintManager is deleting Pod: taint-based-evictionsab075fea-193d-4176-9122-ae81dac03dbf/testpod-2
I0920 04:28:17.756515  108327 event.go:255] Event(v1.ObjectReference{Kind:"Pod", Namespace:"taint-based-evictionsab075fea-193d-4176-9122-ae81dac03dbf", Name:"testpod-2", UID:"", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TaintManagerEviction' Marking for deletion Pod taint-based-evictionsab075fea-193d-4176-9122-ae81dac03dbf/testpod-2
I0920 04:28:17.756569  108327 event.go:255] Event(v1.ObjectReference{Kind:"Pod", Namespace:"taint-based-evictionsab075fea-193d-4176-9122-ae81dac03dbf", Name:"testpod-2", UID:"", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TaintManagerEviction' Marking for deletion Pod taint-based-evictionsab075fea-193d-4176-9122-ae81dac03dbf/testpod-2
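The "Adding TimedWorkerQueue item ... to be fired at ..." lines show why this sub-test evicts immediately: with tolerationSeconds set to 0, the fire time equals the enqueue time, so the delete runs at once. A minimal sketch of that idea, with assumed names (scheduleEviction, deletePod) rather than the real timed_workers.go types:

package main

import (
	"fmt"
	"time"
)

// scheduleEviction fires deletePod once the pod's toleration window elapses;
// a zero window means the timer fires immediately.
func scheduleEviction(pod string, tolerationSeconds int, deletePod func(string)) *time.Timer {
	delay := time.Duration(tolerationSeconds) * time.Second
	return time.AfterFunc(delay, func() { deletePod(pod) })
}

func main() {
	done := make(chan struct{})
	scheduleEviction("testpod-2", 0, func(p string) {
		fmt.Println("deleting pod", p) // mirrors "NoExecuteTaintManager is deleting Pod"
		close(done)
	})
	<-done
}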
I0920 04:28:17.757683  108327 httplog.go:90] PATCH /api/v1/nodes/node-2: (2.21258ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:17.757863  108327 controller_utils.go:204] Added [&Taint{Key:node.kubernetes.io/not-ready,Value:,Effect:NoExecute,TimeAdded:2019-09-20 04:28:17.753731152 +0000 UTC m=+366.052498711,}] Taint to Node node-2
I0920 04:28:17.757890  108327 controller_utils.go:216] Made sure that Node node-2 has no [&Taint{Key:node.kubernetes.io/unreachable,Value:,Effect:NoExecute,TimeAdded:<nil>,}] Taint
I0920 04:28:17.758239  108327 httplog.go:90] POST /api/v1/namespaces/taint-based-evictionsab075fea-193d-4176-9122-ae81dac03dbf/events: (1.173122ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37746]
I0920 04:28:17.758633  108327 httplog.go:90] POST /api/v1/namespaces/taint-based-evictionsab075fea-193d-4176-9122-ae81dac03dbf/events: (1.599146ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37744]
I0920 04:28:17.759079  108327 store.go:362] GuaranteedUpdate of /79daed50-3f61-49ed-b9a7-7aec41623006/pods/taint-based-evictionsab075fea-193d-4176-9122-ae81dac03dbf/testpod-2 failed because of a conflict, going to retry
I0920 04:28:17.759385  108327 httplog.go:90] DELETE /api/v1/namespaces/taint-based-evictionsab075fea-193d-4176-9122-ae81dac03dbf/pods/testpod-2: (2.802532ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37738]
I0920 04:28:17.759438  108327 httplog.go:90] DELETE /api/v1/namespaces/taint-based-evictionsab075fea-193d-4176-9122-ae81dac03dbf/pods/testpod-2: (2.954856ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37742]
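Both racing workers issued a DELETE for testpod-2 and both got 200, which is consistent with graceful deletion: the first call only stamps deletionTimestamp rather than removing the object, so the second call still succeeds. A toy sketch of that idempotence (the pod type here is an assumption, not the API server's storage model):

package main

import (
	"fmt"
	"time"
)

type pod struct {
	name              string
	deletionTimestamp *time.Time
}

// gracefulDelete stamps deletionTimestamp on first call and is a no-op after,
// so concurrent eviction attempts from duplicate workers are harmless.
func gracefulDelete(p *pod) string {
	if p.deletionTimestamp == nil {
		now := time.Now()
		p.deletionTimestamp = &now
	}
	return "200 OK"
}

func main() {
	p := &pod{name: "testpod-2"}
	fmt.Println(gracefulDelete(p)) // first worker
	fmt.Println(gracefulDelete(p)) // second worker, racing
}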
I0920 04:28:17.770832  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.865739ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37746]
I0920 04:28:17.867217  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:17.867247  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:17.867225  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:17.869025  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:17.869072  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:17.869073  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:17.869427  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:17.871123  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.933047ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37746]
I0920 04:28:17.970872  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.880786ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37746]
I0920 04:28:18.071042  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.083135ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37746]
I0920 04:28:18.170758  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.755136ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37746]
I0920 04:28:18.270944  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.905922ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37746]
I0920 04:28:18.294533  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:18.294570  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:18.294533  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:18.294533  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:18.294539  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:18.294542  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:18.366289  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:18.366362  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:18.366376  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:18.366835  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:18.367319  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:18.367577  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:18.370905  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.88245ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37746]
I0920 04:28:18.445051  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:18.445369  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:18.445451  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:18.445479  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:18.447379  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:18.447718  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:18.470916  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.921752ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37746]
I0920 04:28:18.502027  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:18.570971  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:18.570994  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.852826ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37746]
I0920 04:28:18.649994  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:18.671069  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.105067ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37746]
I0920 04:28:18.770746  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.774886ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37746]
I0920 04:28:18.867447  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:18.867447  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:18.867464  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:18.869198  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:18.869244  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:18.869264  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:18.869591  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:18.870841  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.905161ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37746]
I0920 04:28:18.970773  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.773937ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37746]
I0920 04:28:19.071117  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.990947ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37746]
I0920 04:28:19.170986  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.968878ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37746]
I0920 04:28:19.270815  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.849876ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37746]
I0920 04:28:19.294775  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:19.294793  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:19.294816  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:19.294849  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:19.294868  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:19.294876  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:19.366694  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:19.366766  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:19.366779  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:19.366971  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:19.367616  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:19.367779  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:19.370783  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.842337ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37746]
I0920 04:28:19.445277  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:19.445679  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:19.445684  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:19.445763  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:19.447681  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:19.447953  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:19.470748  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.835095ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37746]
I0920 04:28:19.502213  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:19.571097  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.0437ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37746]
I0920 04:28:19.571165  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:19.650224  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:19.670769  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.837285ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37746]
I0920 04:28:19.771092  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.006048ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37746]
I0920 04:28:19.867744  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:19.867830  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:19.867845  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:19.869463  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:19.869496  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:19.869562  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:19.869829  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:19.871018  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.068782ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37746]
I0920 04:28:19.970950  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.994033ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37746]
I0920 04:28:20.070886  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.919624ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37746]
I0920 04:28:20.170854  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.959504ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37746]
I0920 04:28:20.270895  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.898522ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37746]
I0920 04:28:20.294993  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:20.295019  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:20.294988  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:20.295057  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:20.295066  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:20.295072  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:20.366895  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:20.366895  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:20.366921  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:20.367112  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:20.367796  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:20.367945  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:20.370924  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.920925ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37746]
I0920 04:28:20.445654  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:20.446038  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:20.446042  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:20.446050  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:20.447985  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:20.448237  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:20.470926  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.894643ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37746]
I0920 04:28:20.502453  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:20.570930  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.897957ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37746]
I0920 04:28:20.571312  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:20.650472  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:20.670828  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.852048ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37746]
I0920 04:28:20.770747  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.746264ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37746]
I0920 04:28:20.867964  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:20.867964  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:20.867974  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:20.869634  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:20.869634  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:20.869704  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:20.870006  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:20.870890  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.879279ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37746]
I0920 04:28:20.970833  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.83029ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37746]
I0920 04:28:21.070738  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.787675ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37746]
I0920 04:28:21.170810  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.879043ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37746]
I0920 04:28:21.271175  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.10889ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37746]
I0920 04:28:21.295201  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:21.295244  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:21.295256  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:21.295226  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:21.295269  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:21.295226  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:21.367124  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:21.367124  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:21.367138  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:21.367276  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:21.367962  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:21.368170  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:21.370982  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.028465ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37746]
I0920 04:28:21.445867  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:21.446228  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:21.446227  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:21.446409  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:21.448234  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:21.448429  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:21.470935  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.994564ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37746]
I0920 04:28:21.502918  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:21.571294  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.256904ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37746]
I0920 04:28:21.571626  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:21.650693  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:21.670984  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.945168ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37746]
I0920 04:28:21.770907  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.97455ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37746]
I0920 04:28:21.868236  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:21.868382  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:21.868482  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:21.869837  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:21.869840  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:21.869919  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:21.870166  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:21.870835  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.818061ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37746]
I0920 04:28:21.971005  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.976916ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37746]
I0920 04:28:22.070707  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.755718ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37746]
I0920 04:28:22.083338  108327 httplog.go:90] GET /api/v1/namespaces/default: (1.800107ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:28:22.085435  108327 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.535605ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:28:22.087223  108327 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.292332ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:28:22.170910  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.847556ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37746]
I0920 04:28:22.251256  108327 httplog.go:90] GET /api/v1/namespaces/default: (1.864749ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37746]
I0920 04:28:22.253347  108327 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.361053ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37746]
I0920 04:28:22.255067  108327 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.211218ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37746]
I0920 04:28:22.270972  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.947712ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37746]
I0920 04:28:22.295456  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:22.295454  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:22.295498  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:22.295475  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:22.295507  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:22.295534  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:22.367362  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:22.367383  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:22.367361  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:22.367773  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:22.368140  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:22.368346  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:22.370908  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.991788ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37746]
I0920 04:28:22.446048  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:22.446417  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:22.446549  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:22.446427  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:22.448477  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:22.448608  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:22.470876  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.857923ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37746]
I0920 04:28:22.503113  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:22.570865  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.886533ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37746]
I0920 04:28:22.571762  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:22.650908  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:22.671159  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.052594ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37746]
I0920 04:28:22.745246  108327 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 5.000413967s. Last Ready is: &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:0001-01-01 00:00:00 +0000 UTC,Reason:,Message:,}
I0920 04:28:22.745322  108327 node_lifecycle_controller.go:1012] Condition MemoryPressure of node node-0 was never updated by kubelet
I0920 04:28:22.745336  108327 node_lifecycle_controller.go:1012] Condition DiskPressure of node node-0 was never updated by kubelet
I0920 04:28:22.745377  108327 node_lifecycle_controller.go:1012] Condition PIDPressure of node node-0 was never updated by kubelet
I0920 04:28:22.747732  108327 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 5.000527538s. Last Ready is: &NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:0001-01-01 00:00:00 +0000 UTC,Reason:,Message:,}
I0920 04:28:22.747782  108327 node_lifecycle_controller.go:1012] Condition MemoryPressure of node node-2 was never updated by kubelet
I0920 04:28:22.747791  108327 node_lifecycle_controller.go:1012] Condition DiskPressure of node node-2 was never updated by kubelet
I0920 04:28:22.747799  108327 node_lifecycle_controller.go:1012] Condition PIDPressure of node node-2 was never updated by kubelet
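The "hasn't been updated for 5.000...s" lines are the monitor's heartbeat check firing: once a node's last update is older than the grace period, its conditions get marked Unknown. The ~5s figure is inferred from the log as this test's grace period, not quoted from config. A minimal sketch of the check:

package main

import (
	"fmt"
	"time"
)

// probeTimedOut reports whether the node has gone silent longer than the
// configured monitor grace period.
func probeTimedOut(lastHeartbeat time.Time, gracePeriod time.Duration, now time.Time) bool {
	return now.Sub(lastHeartbeat) > gracePeriod
}

func main() {
	grace := 5 * time.Second
	last := time.Now().Add(-5001 * time.Millisecond) // just past the window
	if probeTimedOut(last, grace, time.Now()) {
		// The controller then flips Ready/MemoryPressure/DiskPressure/PIDPressure
		// to Unknown, as logged above for node-0, node-1, and node-2.
		fmt.Println("node unresponsive: set conditions to Unknown")
	}
}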
I0920 04:28:22.749238  108327 httplog.go:90] PUT /api/v1/nodes/node-0/status: (3.161351ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37746]
I0920 04:28:22.749759  108327 controller_utils.go:180] Recording status change NodeNotReady event message for node node-0
I0920 04:28:22.749795  108327 controller_utils.go:124] Update ready status of pods on node [node-0]
I0920 04:28:22.749939  108327 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-0", UID:"0b869b0a-f7cb-4fde-ab21-bb12d76534d7", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeNotReady' Node node-0 status is now: NodeNotReady
I0920 04:28:22.750811  108327 httplog.go:90] PUT /api/v1/nodes/node-2/status: (2.613725ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:22.750876  108327 httplog.go:90] GET /api/v1/nodes/node-0?resourceVersion=0: (1.125374ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37746]
I0920 04:28:22.751174  108327 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 5.004102584s. Last Ready is: &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:0001-01-01 00:00:00 +0000 UTC,Reason:,Message:,}
I0920 04:28:22.751238  108327 node_lifecycle_controller.go:1012] Condition MemoryPressure of node node-0 was never updated by kubelet
I0920 04:28:22.751249  108327 node_lifecycle_controller.go:1012] Condition DiskPressure of node node-0 was never updated by kubelet
I0920 04:28:22.751254  108327 node_lifecycle_controller.go:1012] Condition PIDPressure of node node-0 was never updated by kubelet
I0920 04:28:22.751257  108327 httplog.go:90] GET /api/v1/nodes/node-0?resourceVersion=0: (442.222µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37752]
I0920 04:28:22.752092  108327 httplog.go:90] GET /api/v1/pods?fieldSelector=spec.nodeName%3Dnode-0: (1.824521ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37750]
I0920 04:28:22.752137  108327 httplog.go:90] GET /api/v1/nodes/node-2?resourceVersion=0: (407.975µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37746]
I0920 04:28:22.752292  108327 httplog.go:90] GET /api/v1/nodes/node-2?resourceVersion=0: (410.295µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37752]
I0920 04:28:22.752356  108327 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 5.007889958s. Last Ready is: &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:0001-01-01 00:00:00 +0000 UTC,Reason:,Message:,}
I0920 04:28:22.752443  108327 node_lifecycle_controller.go:1012] Condition MemoryPressure of node node-1 was never updated by kubelet
I0920 04:28:22.752455  108327 node_lifecycle_controller.go:1012] Condition DiskPressure of node node-1 was never updated by kubelet
I0920 04:28:22.752496  108327 node_lifecycle_controller.go:1012] Condition PIDPressure of node node-1 was never updated by kubelet
I0920 04:28:22.752858  108327 httplog.go:90] POST /api/v1/namespaces/default/events: (2.053108ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37756]
I0920 04:28:22.753464  108327 httplog.go:90] PUT /api/v1/nodes/node-0/status: (1.729983ms) 409 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
E0920 04:28:22.753651  108327 node_lifecycle_controller.go:1037] Error updating node node-0: Operation cannot be fulfilled on nodes "node-0": the object has been modified; please apply your changes to the latest version and try again
I0920 04:28:22.754830  108327 httplog.go:90] GET /api/v1/nodes/node-0: (1.010449ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:22.755545  108327 store.go:362] GuaranteedUpdate of /79daed50-3f61-49ed-b9a7-7aec41623006/minions/node-0 failed because of a conflict, going to retry
I0920 04:28:22.756125  108327 httplog.go:90] PATCH /api/v1/nodes/node-0: (3.174335ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37746]
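The 409 at 04:28:22.753 is optimistic concurrency at work: two workers raced on node-0, the loser's PUT was rejected, and the storage layer's GuaranteedUpdate re-fetched and retried. Client code typically handles the same conflict with client-go's retry helper; a sketch assuming a configured clientset (signatures match the client-go vintage of this run; newer releases add a context.Context first argument to Get and UpdateStatus):

    package nodeutil

    import (
    	v1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/util/retry"
    )

    // updateNodeStatus re-reads the freshest copy of the node and reapplies
    // mutate until the write stops conflicting, the same shape of retry
    // that GuaranteedUpdate performs in the log above.
    func updateNodeStatus(cs kubernetes.Interface, name string, mutate func(*v1.Node)) error {
    	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
    		node, err := cs.CoreV1().Nodes().Get(name, metav1.GetOptions{})
    		if err != nil {
    			return err
    		}
    		mutate(node)
    		_, err = cs.CoreV1().Nodes().UpdateStatus(node)
    		return err
    	})
    }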
I0920 04:28:22.756144  108327 httplog.go:90] PUT /api/v1/nodes/node-1/status: (3.128807ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37758]
I0920 04:28:22.756459  108327 controller_utils.go:180] Recording status change NodeNotReady event message for node node-1
I0920 04:28:22.756496  108327 controller_utils.go:124] Update ready status of pods on node [node-1]
I0920 04:28:22.756558  108327 controller_utils.go:204] Added [&Taint{Key:node.kubernetes.io/unreachable,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 04:28:22.749714323 +0000 UTC m=+371.048481881,}] Taint to Node node-0
I0920 04:28:22.756612  108327 controller_utils.go:216] Made sure that Node node-0 has no [] Taint
I0920 04:28:22.756633  108327 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-1", UID:"3c725a3c-57bb-4cdf-8bda-f5207514a389", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeNotReady' Node node-1 status is now: NodeNotReady
I0920 04:28:22.756857  108327 store.go:362] GuaranteedUpdate of /79daed50-3f61-49ed-b9a7-7aec41623006/minions/node-2 failed because of a conflict, going to retry
I0920 04:28:22.756922  108327 httplog.go:90] PATCH /api/v1/nodes/node-0: (4.66666ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37760]
I0920 04:28:22.757202  108327 controller_utils.go:204] Added [&Taint{Key:node.kubernetes.io/unreachable,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 04:28:22.749497191 +0000 UTC m=+371.048264750,}] Taint to Node node-0
I0920 04:28:22.757243  108327 controller_utils.go:216] Made sure that Node node-0 has no [] Taint
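For reference, the taint those controller_utils lines just attached, written out as the core/v1 object. The key and effect are the real well-known values; observed is a stand-in for the controller's observation timestamp shown in the log:

    package main

    import (
    	"fmt"
    	"time"

    	v1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // unreachableNoSchedule builds the taint added to node-0 above. The
    // matching NoExecute variant appears further down, once the taint
    // manager takes over eviction.
    func unreachableNoSchedule(observed time.Time) v1.Taint {
    	t := metav1.NewTime(observed)
    	return v1.Taint{
    		Key:       "node.kubernetes.io/unreachable",
    		Effect:    v1.TaintEffectNoSchedule,
    		TimeAdded: &t,
    	}
    }

    func main() { fmt.Println(unreachableNoSchedule(time.Now())) }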
I0920 04:28:22.757878  108327 httplog.go:90] GET /api/v1/pods?fieldSelector=spec.nodeName%3Dnode-1: (1.191545ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37746]
I0920 04:28:22.758052  108327 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 5.013453884s. Last Ready is: &NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:0001-01-01 00:00:00 +0000 UTC,Reason:,Message:,}
I0920 04:28:22.758090  108327 node_lifecycle_controller.go:1012] Condition MemoryPressure of node node-2 was never updated by kubelet
I0920 04:28:22.758100  108327 node_lifecycle_controller.go:1012] Condition DiskPressure of node node-2 was never updated by kubelet
I0920 04:28:22.758108  108327 node_lifecycle_controller.go:1012] Condition PIDPressure of node node-2 was never updated by kubelet
I0920 04:28:22.758116  108327 httplog.go:90] PATCH /api/v1/nodes/node-2: (4.659893ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37752]
I0920 04:28:22.758255  108327 httplog.go:90] PATCH /api/v1/nodes/node-2: (4.185287ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37750]
I0920 04:28:22.758508  108327 controller_utils.go:204] Added [&Taint{Key:node.kubernetes.io/unreachable,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 04:28:22.751261899 +0000 UTC m=+371.050029458,}] Taint to Node node-2
I0920 04:28:22.758582  108327 controller_utils.go:204] Added [&Taint{Key:node.kubernetes.io/unreachable,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 04:28:22.751353209 +0000 UTC m=+371.050120778,}] Taint to Node node-2
I0920 04:28:22.758733  108327 httplog.go:90] POST /api/v1/namespaces/default/events: (1.825877ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:22.758870  108327 httplog.go:90] GET /api/v1/nodes/node-1?resourceVersion=0: (417.94µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37760]
I0920 04:28:22.759183  108327 httplog.go:90] GET /api/v1/nodes/node-1?resourceVersion=0: (691.604µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37746]
I0920 04:28:22.759192  108327 httplog.go:90] GET /api/v1/nodes/node-2?resourceVersion=0: (450.446µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37762]
I0920 04:28:22.760072  108327 httplog.go:90] GET /api/v1/nodes/node-2?resourceVersion=0: (492.925µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:22.760120  108327 httplog.go:90] PUT /api/v1/nodes/node-2/status: (1.450222ms) 409 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37752]
E0920 04:28:22.760274  108327 node_lifecycle_controller.go:1037] Error updating node node-2: Operation cannot be fulfilled on nodes "node-2": the object has been modified; please apply your changes to the latest version and try again
I0920 04:28:22.761566  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.113963ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37752]
I0920 04:28:22.762465  108327 httplog.go:90] PATCH /api/v1/nodes/node-2: (2.219064ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37750]
I0920 04:28:22.762533  108327 httplog.go:90] PATCH /api/v1/nodes/node-1: (2.295296ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37762]
I0920 04:28:22.762835  108327 controller_utils.go:216] Made sure that Node node-2 has no [&Taint{Key:node.kubernetes.io/not-ready,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 04:28:12 +0000 UTC,}] Taint
I0920 04:28:22.762895  108327 controller_utils.go:204] Added [&Taint{Key:node.kubernetes.io/unreachable,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 04:28:22.758021773 +0000 UTC m=+371.056789333,}] Taint to Node node-1
I0920 04:28:22.762925  108327 controller_utils.go:216] Made sure that Node node-1 has no [] Taint
I0920 04:28:22.763039  108327 store.go:362] GuaranteedUpdate of /79daed50-3f61-49ed-b9a7-7aec41623006/minions/node-2 failed because of a conflict, going to retry
I0920 04:28:22.764462  108327 httplog.go:90] PATCH /api/v1/nodes/node-2: (3.333295ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:22.764902  108327 controller_utils.go:216] Made sure that Node node-2 has no [&Taint{Key:node.kubernetes.io/not-ready,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 04:28:12 +0000 UTC,}] Taint
I0920 04:28:22.765119  108327 httplog.go:90] PATCH /api/v1/nodes/node-1: (1.771069ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37750]
I0920 04:28:22.765380  108327 controller_utils.go:204] Added [&Taint{Key:node.kubernetes.io/unreachable,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 04:28:22.758129894 +0000 UTC m=+371.056897448,}] Taint to Node node-1
I0920 04:28:22.765437  108327 controller_utils.go:216] Made sure that Node node-1 has no [] Taint
I0920 04:28:22.769959  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.159613ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:22.775264  108327 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 5.028189198s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:28:22 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:28:22.775313  108327 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 5.028245678s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:28:12 +0000 UTC,LastTransitionTime:2019-09-20 04:28:22 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:28:22.775339  108327 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 5.028272272s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:28:12 +0000 UTC,LastTransitionTime:2019-09-20 04:28:22 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:28:22.775354  108327 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 5.028287746s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:28:12 +0000 UTC,LastTransitionTime:2019-09-20 04:28:22 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:28:22.775457  108327 node_lifecycle_controller.go:796] Node node-0 is unresponsive as of 2019-09-20 04:28:22.775432385 +0000 UTC m=+371.074199960. Adding it to the Taint queue.
I0920 04:28:22.775498  108327 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 5.02835412s. Last Ready is: &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:0001-01-01 00:00:00 +0000 UTC,Reason:,Message:,}
I0920 04:28:22.775522  108327 node_lifecycle_controller.go:1012] Condition MemoryPressure of node node-1 was never updated by kubelet
I0920 04:28:22.775530  108327 node_lifecycle_controller.go:1012] Condition DiskPressure of node node-1 was never updated by kubelet
I0920 04:28:22.775536  108327 node_lifecycle_controller.go:1012] Condition PIDPressure of node node-1 was never updated by kubelet
I0920 04:28:22.777576  108327 httplog.go:90] PUT /api/v1/nodes/node-1/status: (1.721549ms) 409 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
E0920 04:28:22.777830  108327 node_lifecycle_controller.go:1037] Error updating node node-1: Operation cannot be fulfilled on nodes "node-1": the object has been modified; please apply your changes to the latest version and try again
I0920 04:28:22.779256  108327 httplog.go:90] GET /api/v1/nodes/node-1: (1.217821ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:22.782035  108327 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 5.037433011s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:28:22 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:28:22.782086  108327 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 5.037488437s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:28:12 +0000 UTC,LastTransitionTime:2019-09-20 04:28:22 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:28:22.782120  108327 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 5.037523866s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:28:12 +0000 UTC,LastTransitionTime:2019-09-20 04:28:22 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:28:22.782135  108327 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 5.03753942s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:28:12 +0000 UTC,LastTransitionTime:2019-09-20 04:28:22 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:28:22.782905  108327 httplog.go:90] GET /api/v1/nodes/node-2?resourceVersion=0: (458.36µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:22.786208  108327 httplog.go:90] PATCH /api/v1/nodes/node-2: (2.397647ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:22.786510  108327 controller_utils.go:204] Added [&Taint{Key:node.kubernetes.io/unreachable,Value:,Effect:NoExecute,TimeAdded:2019-09-20 04:28:22.782191942 +0000 UTC m=+371.080959502,}] Taint to Node node-2
I0920 04:28:22.786740  108327 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-2"}
I0920 04:28:22.786744  108327 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-2"}
I0920 04:28:22.786759  108327 taint_manager.go:438] Updating known taints on node node-2: [{node.kubernetes.io/not-ready  NoExecute 2019-09-20 04:28:17 +0000 UTC} {node.kubernetes.io/unreachable  NoExecute 2019-09-20 04:28:22 +0000 UTC}]
I0920 04:28:22.786766  108327 taint_manager.go:438] Updating known taints on node node-2: [{node.kubernetes.io/not-ready  NoExecute 2019-09-20 04:28:17 +0000 UTC} {node.kubernetes.io/unreachable  NoExecute 2019-09-20 04:28:22 +0000 UTC}]
I0920 04:28:22.786791  108327 timed_workers.go:110] Adding TimedWorkerQueue item taint-based-evictionsab075fea-193d-4176-9122-ae81dac03dbf/testpod-2 at 2019-09-20 04:28:22.786780689 +0000 UTC m=+371.085548245 to be fired at 2019-09-20 04:28:22.786780689 +0000 UTC m=+371.085548245
I0920 04:28:22.786797  108327 timed_workers.go:110] Adding TimedWorkerQueue item taint-based-evictionsab075fea-193d-4176-9122-ae81dac03dbf/testpod-2 at 2019-09-20 04:28:22.786788626 +0000 UTC m=+371.085556180 to be fired at 2019-09-20 04:28:22.786788626 +0000 UTC m=+371.085556180
W0920 04:28:22.786801  108327 timed_workers.go:115] Trying to add already existing work for &{NamespacedName:taint-based-evictionsab075fea-193d-4176-9122-ae81dac03dbf/testpod-2}. Skipping.
W0920 04:28:22.786808  108327 timed_workers.go:115] Trying to add already existing work for &{NamespacedName:taint-based-evictionsab075fea-193d-4176-9122-ae81dac03dbf/testpod-2}. Skipping.
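The two "Trying to add already existing work ... Skipping" warnings are the taint manager's timed-worker queue deduplicating: both watch events resolved to the same pod key, and only the first armed a timer. A minimal sketch of that behavior, assuming nothing about the real TimedWorkerQueue API beyond what the log shows:

    package main

    import (
    	"fmt"
    	"sync"
    	"time"
    )

    // timedQueue is a hypothetical stand-in for TimedWorkerQueue: one timer
    // per pod key, and adding work for a key that already has a timer is a
    // no-op, which is exactly the warning above.
    type timedQueue struct {
    	mu      sync.Mutex
    	workers map[string]*time.Timer
    }

    func (q *timedQueue) addWork(key string, fireAt time.Time, fn func()) {
    	q.mu.Lock()
    	defer q.mu.Unlock()
    	if _, exists := q.workers[key]; exists {
    		fmt.Printf("Trying to add already existing work for %s. Skipping.\n", key)
    		return
    	}
    	q.workers[key] = time.AfterFunc(time.Until(fireAt), fn)
    }

    func main() {
    	q := &timedQueue{workers: map[string]*time.Timer{}}
    	evict := func() { fmt.Println("evicting testpod-2") }
    	q.addWork("ns/testpod-2", time.Now().Add(time.Second), evict)
    	q.addWork("ns/testpod-2", time.Now().Add(time.Second), evict) // skipped
    	time.Sleep(1500 * time.Millisecond)                           // let the single timer fire
    }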
I0920 04:28:22.787490  108327 httplog.go:90] GET /api/v1/nodes/node-2?resourceVersion=0: (615.411µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:22.790474  108327 httplog.go:90] PATCH /api/v1/nodes/node-2: (2.22897ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:22.790859  108327 controller_utils.go:216] Made sure that Node node-2 has no [&Taint{Key:node.kubernetes.io/not-ready,Value:,Effect:NoExecute,TimeAdded:<nil>,}] Taint
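Note the swap: node.kubernetes.io/not-ready was removed just after node.kubernetes.io/unreachable was added. The controller keeps at most one of the two NoExecute taints per node, chosen from the Ready condition's status; a sketch of that selection (function name is hypothetical):

    package main

    import (
    	"fmt"

    	v1 "k8s.io/api/core/v1"
    )

    // noExecuteTaintKey sketches the choice behind the swap above: the
    // node's Ready condition decides which of the two mutually exclusive
    // NoExecute taints it carries.
    func noExecuteTaintKey(ready v1.ConditionStatus) string {
    	switch ready {
    	case v1.ConditionFalse:
    		return "node.kubernetes.io/not-ready" // kubelet reported NotReady
    	case v1.ConditionUnknown:
    		return "node.kubernetes.io/unreachable" // kubelet stopped reporting
    	default:
    		return "" // healthy nodes carry neither
    	}
    }

    func main() { fmt.Println(noExecuteTaintKey(v1.ConditionUnknown)) }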
I0920 04:28:22.790934  108327 node_lifecycle_controller.go:1094] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
I0920 04:28:22.790962  108327 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-2"}
I0920 04:28:22.790978  108327 taint_manager.go:438] Updating known taints on node node-2: [{node.kubernetes.io/unreachable  NoExecute 2019-09-20 04:28:22 +0000 UTC}]
I0920 04:28:22.791001  108327 timed_workers.go:110] Adding TimedWorkerQueue item taint-based-evictionsab075fea-193d-4176-9122-ae81dac03dbf/testpod-2 at 2019-09-20 04:28:22.790993776 +0000 UTC m=+371.089761333 to be fired at 2019-09-20 04:33:22.790993776 +0000 UTC m=+671.089761333
W0920 04:28:22.791021  108327 timed_workers.go:115] Trying to add already existing work for &{NamespacedName:taint-based-evictionsab075fea-193d-4176-9122-ae81dac03dbf/testpod-2}. Skipping.
I0920 04:28:22.791068  108327 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-2"}
I0920 04:28:22.791075  108327 taint_manager.go:438] Updating known taints on node node-2: [{node.kubernetes.io/unreachable  NoExecute 2019-09-20 04:28:22 +0000 UTC}]
I0920 04:28:22.791089  108327 timed_workers.go:110] Adding TimedWorkerQueue item taint-based-evictionsab075fea-193d-4176-9122-ae81dac03dbf/testpod-2 at 2019-09-20 04:28:22.791084669 +0000 UTC m=+371.089852227 to be fired at 2019-09-20 04:33:22.791084669 +0000 UTC m=+671.089852227
W0920 04:28:22.791115  108327 timed_workers.go:115] Trying to add already existing work for &{NamespacedName:taint-based-evictionsab075fea-193d-4176-9122-ae81dac03dbf/testpod-2}. Skipping.
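The new fire time, 04:33:22, is exactly 300 seconds after the worker was added at 04:28:22: with the taint now flipped to unreachable, the eviction deadline comes from the pod's matching toleration, and 300s is the default that Kubernetes admission stamps onto pods for the not-ready and unreachable taints (whether testpod-2 got it from admission or from the test spec is not visible in the log). A toleration producing this schedule:

    package main

    import (
    	"fmt"

    	v1 "k8s.io/api/core/v1"
    )

    // defaultUnreachableToleration yields the +300s deadline in the log:
    // the pod tolerates the unreachable NoExecute taint for 300s, after
    // which the taint manager's timer deletes it.
    func defaultUnreachableToleration() v1.Toleration {
    	seconds := int64(300)
    	return v1.Toleration{
    		Key:               "node.kubernetes.io/unreachable",
    		Operator:          v1.TolerationOpExists,
    		Effect:            v1.TaintEffectNoExecute,
    		TolerationSeconds: &seconds,
    	}
    }

    func main() { fmt.Println(defaultUnreachableToleration()) }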
I0920 04:28:22.791480  108327 httplog.go:90] GET /api/v1/nodes/node-2?resourceVersion=0: (396.492µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:22.799840  108327 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 5.052683632s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:28:22 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:28:22.799891  108327 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 5.052747472s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:28:12 +0000 UTC,LastTransitionTime:2019-09-20 04:28:22 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:28:22.799907  108327 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 5.052763983s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:28:12 +0000 UTC,LastTransitionTime:2019-09-20 04:28:22 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:28:22.799918  108327 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 5.052774914s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:28:12 +0000 UTC,LastTransitionTime:2019-09-20 04:28:22 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:28:22.799969  108327 node_lifecycle_controller.go:796] Node node-1 is unresponsive as of 2019-09-20 04:28:22.79995204 +0000 UTC m=+371.098719592. Adding it to the Taint queue.
I0920 04:28:22.799995  108327 node_lifecycle_controller.go:1094] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
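"Entering master disruption mode" means the controller has judged this to look like a control-plane or network outage rather than three independently dead nodes, so it backs off eviction. The trigger is a per-zone ratio check; the 0.55 below mirrors the controller's default --unhealthy-zone-threshold flag, and with all 3 of 3 test nodes down the zone trips immediately. A sketch of the check (function and parameter names are hypothetical):

    package main

    import "fmt"

    // zoneIsDisrupted sketches the zone-health test behind the message
    // above: if too large a fraction of nodes is not ready, evicting would
    // likely do more harm than good, so the controller backs off.
    func zoneIsDisrupted(notReady, total int, unhealthyThreshold float64) bool {
    	if total == 0 {
    		return false
    	}
    	return float64(notReady)/float64(total) >= unhealthyThreshold
    }

    func main() {
    	fmt.Println(zoneIsDisrupted(3, 3, 0.55)) // true: all three test nodes are down
    }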
I0920 04:28:22.800946  108327 httplog.go:90] GET /api/v1/nodes/node-2?resourceVersion=0: (728.798µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
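Everything from here to the 10-second sweep at 04:28:27.79 is the test busy-waiting: one GET /api/v1/nodes/node-2 roughly every 100ms while it waits for the expected taint or eviction to materialize. A hypothetical reconstruction of such a poll using apimachinery's wait package (the test's real helper may differ; the 100ms cadence is read off the log):

    package waitutil

    import (
    	"time"

    	v1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    )

    // waitForNoExecuteTaint polls the node every 100ms until the given
    // NoExecute taint appears, matching the request cadence in the log.
    func waitForNoExecuteTaint(cs kubernetes.Interface, nodeName, taintKey string) error {
    	return wait.PollImmediate(100*time.Millisecond, 30*time.Second, func() (bool, error) {
    		node, err := cs.CoreV1().Nodes().Get(nodeName, metav1.GetOptions{})
    		if err != nil {
    			return false, err
    		}
    		for _, t := range node.Spec.Taints {
    			if t.Key == taintKey && t.Effect == v1.TaintEffectNoExecute {
    				return true, nil
    			}
    		}
    		return false, nil
    	})
    }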
[... seven near-identical reflector.go:236 "forcing resync" lines (04:28:22.868-22.870) elided ...]
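Those resync bursts (several reflectors firing within a few milliseconds of each other, repeating about once per second for the rest of the run) come from shared-informer factories built with a short resync period: each period, the reflector replays its cache to every registered handler. A sketch of where such a cadence originates; the 1-second period is inferred from the log spacing, not read from the test source:

    package informersketch

    import (
    	"time"

    	"k8s.io/client-go/informers"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/cache"
    )

    // startPodInformer builds a factory with a ~1s resync period (tests use
    // short periods; production code usually picks minutes) so its cache is
    // replayed to all handlers each second, producing "forcing resync".
    func startPodInformer(cs kubernetes.Interface, stop <-chan struct{}) cache.SharedIndexInformer {
    	factory := informers.NewSharedInformerFactory(cs, 1*time.Second) // resync period
    	inf := factory.Core().V1().Pods().Informer()
    	factory.Start(stop)
    	return inf
    }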
[... 04:28:22.871-04:28:27.771 elided: the test GETs /api/v1/nodes/node-2 every ~100ms (all HTTP 200), reflectors keep forcing resyncs roughly once per second, and the apiserver reconciler re-reads the default namespace plus the kubernetes service and endpoints at 04:28:25.75 and 04:28:27.16 ...]
I0920 04:28:27.791989  108327 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 10.047381247s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:28:22 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:28:27.792051  108327 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 10.047455608s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:28:12 +0000 UTC,LastTransitionTime:2019-09-20 04:28:22 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:28:27.792065  108327 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 10.047469843s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:28:12 +0000 UTC,LastTransitionTime:2019-09-20 04:28:22 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:28:27.792079  108327 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 10.047483996s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:28:12 +0000 UTC,LastTransitionTime:2019-09-20 04:28:22 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:28:27.792134  108327 node_lifecycle_controller.go:796] Node node-2 is unresponsive as of 2019-09-20 04:28:27.7921173 +0000 UTC m=+376.090884907. Adding it to the Taint queue.
I0920 04:28:27.792237  108327 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 10.047418769s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:28:22 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:28:27.792265  108327 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 10.04744743s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:28:12 +0000 UTC,LastTransitionTime:2019-09-20 04:28:22 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:28:27.792275  108327 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 10.047458249s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:28:12 +0000 UTC,LastTransitionTime:2019-09-20 04:28:22 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:28:27.792287  108327 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 10.047470172s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:28:12 +0000 UTC,LastTransitionTime:2019-09-20 04:28:22 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:28:27.792318  108327 node_lifecycle_controller.go:796] Node node-0 is unresponsive as of 2019-09-20 04:28:27.792310969 +0000 UTC m=+376.091078525. Adding it to the Taint queue.
I0920 04:28:27.792360  108327 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 10.047897908s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:28:22 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:28:27.792405  108327 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 10.047925637s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:28:12 +0000 UTC,LastTransitionTime:2019-09-20 04:28:22 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:28:27.792425  108327 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 10.047963885s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:28:12 +0000 UTC,LastTransitionTime:2019-09-20 04:28:22 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:28:27.792435  108327 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 10.047973842s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:28:12 +0000 UTC,LastTransitionTime:2019-09-20 04:28:22 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:28:27.792460  108327 node_lifecycle_controller.go:796] Node node-1 is unresponsive as of 2019-09-20 04:28:27.792453169 +0000 UTC m=+376.091220728. Adding it to the Taint queue.
I0920 04:28:27.801762  108327 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 10.054684703s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:28:22 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:28:27.801820  108327 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 10.054753434s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:28:12 +0000 UTC,LastTransitionTime:2019-09-20 04:28:22 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:28:27.801836  108327 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 10.054769195s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:28:12 +0000 UTC,LastTransitionTime:2019-09-20 04:28:22 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:28:27.801848  108327 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 10.054781016s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:28:12 +0000 UTC,LastTransitionTime:2019-09-20 04:28:22 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:28:27.801921  108327 node_lifecycle_controller.go:796] Node node-0 is unresponsive as of 2019-09-20 04:28:27.801904619 +0000 UTC m=+376.100672181. Adding it to the Taint queue.
I0920 04:28:27.801954  108327 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 10.054810917s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:28:22 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:28:27.801967  108327 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 10.054822697s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:28:12 +0000 UTC,LastTransitionTime:2019-09-20 04:28:22 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:28:27.801979  108327 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 10.054835809s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:28:12 +0000 UTC,LastTransitionTime:2019-09-20 04:28:22 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:28:27.801994  108327 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 10.05485047s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:28:12 +0000 UTC,LastTransitionTime:2019-09-20 04:28:22 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:28:27.802015  108327 node_lifecycle_controller.go:796] Node node-1 is unresponsive as of 2019-09-20 04:28:27.802008486 +0000 UTC m=+376.100776042. Adding it to the Taint queue.
I0920 04:28:27.802158  108327 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 10.054958755s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:28:22 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:28:27.802209  108327 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 10.055011246s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:28:12 +0000 UTC,LastTransitionTime:2019-09-20 04:28:22 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:28:27.802249  108327 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 10.055050146s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:28:12 +0000 UTC,LastTransitionTime:2019-09-20 04:28:22 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:28:27.802286  108327 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 10.055088831s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:28:12 +0000 UTC,LastTransitionTime:2019-09-20 04:28:22 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:28:27.802339  108327 node_lifecycle_controller.go:796] Node node-2 is unresponsive as of 2019-09-20 04:28:27.802328782 +0000 UTC m=+376.101096340. Adding it to the Taint queue.
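The same three-node sweep from 04:28:22 has now re-run at 04:28:27 with the staleness counters at ~10s: the monitor loop fires on a fixed period (about 5s here, judging by the timestamps) and re-adds still-unresponsive nodes to the taint queue on every pass. A sketch of that cadence; monitorNodeHealth is a hypothetical stand-in for the controller's real method:

    package monitorsketch

    import (
    	"time"

    	"k8s.io/apimachinery/pkg/util/wait"
    )

    // runMonitor sketches the cadence behind the repeated "hasn't been
    // updated for 5s / 10s" blocks: re-check every node on a fixed period,
    // re-adding unresponsive ones to the taint queue each pass.
    func runMonitor(monitorNodeHealth func() error, period time.Duration, stop <-chan struct{}) {
    	wait.Until(func() {
    		_ = monitorNodeHealth() // errors are logged and retried next period
    	}, period, stop)
    }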
I0920 04:28:27.869486  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:27.869620  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:27.869855  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:27.870987  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:27.871049  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:27.871138  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.146293ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:27.871155  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:27.871257  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:27.971073  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.02672ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:28.070900  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.955109ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:28.171050  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.081152ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:28.271061  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.031482ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:28.297033  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:28.297271  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:28.297282  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:28.297298  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:28.297381  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:28.297411  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:28.368947  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:28.368947  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:28.368955  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:28.368961  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:28.369599  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:28.369830  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:28.371104  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.107515ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:28.447518  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:28.447517  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:28.447448  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:28.447679  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:28.450187  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:28.450194  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:28.471156  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.223092ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:28.505074  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:28.570940  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.978689ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:28.572991  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:28.652605  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:28.671057  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.917983ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:28.770990  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.964104ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:28.870000  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:28.870159  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:28.870195  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:28.871099  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.042957ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:28.871297  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:28.970747  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.741791ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:29.071084  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.01142ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:29.170936  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.992172ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:29.270930  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.916172ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:29.297294  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:29.371246  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.196541ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:29.447784  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:29.471263  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.300555ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:29.505313  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:29.571013  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.998481ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:29.573173  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:29.652873  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:29.670997  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.043277ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:29.771114  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.146899ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:29.870158  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:29.871057  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.112565ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:29.871468  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:29.971342  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.302906ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:30.070908  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.95019ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:30.171309  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.309888ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:30.270918  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.924137ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:30.297572  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:30.371059  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.024113ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:30.448006  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:30.471537  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.445304ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:30.505515  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:30.571859  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.621052ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:30.573485  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:30.653133  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:30.671146  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.105824ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:30.771262  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.249655ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:30.870615  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:30.871153  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.1292ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:30.871702  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:30.971184  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.21178ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:31.070905  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.888182ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:31.170995  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.855369ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:31.271257  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.19032ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:31.298011  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:31.371221  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.125112ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:31.448247  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:31.471132  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.067361ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:31.505738  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:31.570755  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.790956ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:31.573702  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:31.653369  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:31.670962  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.050313ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:31.771077  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.903546ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:31.870810  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:31.870895  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.834415ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:31.870970  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:31.971072  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.043349ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:32.070979  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.014642ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:32.083436  108327 httplog.go:90] GET /api/v1/namespaces/default: (1.785323ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:28:32.085556  108327 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.588477ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:28:32.087409  108327 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.330691ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:28:32.170828  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.835937ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:32.251261  108327 httplog.go:90] GET /api/v1/namespaces/default: (1.681933ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:32.253343  108327 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.52229ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:32.255154  108327 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.221725ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:32.270800  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.902017ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:32.298370  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:32.370928  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.86858ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:32.448501  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:32.471144  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.137096ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:32.505920  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:32.571109  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.118822ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:32.573879  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:32.653821  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:32.670984  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.976293ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:32.770838  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.83923ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:32.792761  108327 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 15.048288249s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:28:22 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:28:32.792821  108327 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 15.048358756s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:28:12 +0000 UTC,LastTransitionTime:2019-09-20 04:28:22 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:28:32.792837  108327 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 15.048376021s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:28:12 +0000 UTC,LastTransitionTime:2019-09-20 04:28:22 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:28:32.792850  108327 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 15.048388669s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:28:12 +0000 UTC,LastTransitionTime:2019-09-20 04:28:22 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:28:32.792947  108327 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 15.048350783s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:28:22 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:28:32.792968  108327 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 15.048373147s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:28:12 +0000 UTC,LastTransitionTime:2019-09-20 04:28:22 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:28:32.792984  108327 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 15.048389128s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:28:12 +0000 UTC,LastTransitionTime:2019-09-20 04:28:22 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:28:32.792995  108327 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 15.048400687s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:28:12 +0000 UTC,LastTransitionTime:2019-09-20 04:28:22 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:28:32.793033  108327 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 15.04821647s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:28:22 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:28:32.793122  108327 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 15.048300843s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:28:12 +0000 UTC,LastTransitionTime:2019-09-20 04:28:22 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:28:32.793233  108327 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 15.048414168s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:28:12 +0000 UTC,LastTransitionTime:2019-09-20 04:28:22 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:28:32.793324  108327 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 15.048505129s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:28:12 +0000 UTC,LastTransitionTime:2019-09-20 04:28:22 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:28:32.802619  108327 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 15.055464164s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:28:22 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:28:32.802872  108327 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 15.055723914s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:28:12 +0000 UTC,LastTransitionTime:2019-09-20 04:28:22 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:28:32.802974  108327 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 15.055829009s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:28:12 +0000 UTC,LastTransitionTime:2019-09-20 04:28:22 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:28:32.803042  108327 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 15.055896837s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:28:12 +0000 UTC,LastTransitionTime:2019-09-20 04:28:22 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:28:32.803204  108327 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 15.056005571s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:28:22 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:28:32.803300  108327 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 15.0561s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:28:12 +0000 UTC,LastTransitionTime:2019-09-20 04:28:22 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:28:32.803478  108327 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 15.056275854s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:28:12 +0000 UTC,LastTransitionTime:2019-09-20 04:28:22 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:28:32.803561  108327 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 15.056362231s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:28:12 +0000 UTC,LastTransitionTime:2019-09-20 04:28:22 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:28:32.803699  108327 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 15.056628744s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:28:22 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:28:32.803788  108327 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 15.056720108s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:28:12 +0000 UTC,LastTransitionTime:2019-09-20 04:28:22 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:28:32.803852  108327 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 15.056784297s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:28:12 +0000 UTC,LastTransitionTime:2019-09-20 04:28:22 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:28:32.803920  108327 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 15.056852119s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:28:12 +0000 UTC,LastTransitionTime:2019-09-20 04:28:22 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
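[editor's note] The node_lifecycle_controller.go:1022 block above is the controller's periodic health check: for each node it measures how long the status has gone without an update and compares that against the node-monitor grace period (40s by default; integration tests typically shorten it). Because the test has stopped posting kubelet heartbeats (last at 04:28:12), every condition is already Unknown. Once Ready goes Unknown or False with TaintBasedEvictions enabled, the controller applies a NoExecute taint instead of evicting pods directly: node.kubernetes.io/unreachable for Unknown, node.kubernetes.io/not-ready for False. An illustrative sketch of the not-ready taint this subtest watches for, with the key written as a string literal (the real code uses a named constant):

package main

import v1 "k8s.io/api/core/v1"

// notReadyTaint mirrors the NoExecute taint the lifecycle controller places
// on a node whose Ready condition has turned False (sketch only).
var notReadyTaint = v1.Taint{
	Key:    "node.kubernetes.io/not-ready",
	Effect: v1.TaintEffectNoExecute,
}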
I0920 04:28:32.870989  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:32.871124  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.139024ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:32.871269  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:32.971002  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.056747ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:33.071332  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.321707ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:33.170899  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.906738ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:33.271084  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.000696ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:33.298649  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:33.370973  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.96284ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:33.371424  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:33.470943  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.921255ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:33.506179  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:33.570861  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.93582ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:33.574048  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:33.654057  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:33.671295  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.222988ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:33.771452  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.440863ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:33.869727  108327 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.676641ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42428]
I0920 04:28:33.870911  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.051043ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:33.871171  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:33.871793  108327 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.603238ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42428]
I0920 04:28:33.872250  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:33.873506  108327 httplog.go:90] GET /api/v1/namespaces/kube-node-lease: (1.154533ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42428]
I0920 04:28:33.971074  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.07189ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:34.070985  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.97555ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:34.171473  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.488431ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:34.270979  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.952261ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:34.298852  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:34.370769  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.78002ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:34.370929  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:34.471124  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.136593ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:34.506440  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:34.571364  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.082074ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:34.574298  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:34.654379  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:34.671121  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.141571ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:34.771196  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.247118ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:34.871305  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.253793ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:34.871314  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:34.970940  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.947188ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:35.070992  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.912066ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:35.170763  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.81964ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:35.270823  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.884027ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:35.299194  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:35.371055  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.050099ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:35.371117  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:35.471093  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.120056ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:35.506839  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:35.571346  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.184133ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:35.574503  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:35.654645  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:35.670811  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.900264ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:35.751063  108327 httplog.go:90] GET /api/v1/namespaces/default: (1.744449ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52112]
I0920 04:28:35.753299  108327 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.624346ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52112]
I0920 04:28:35.755285  108327 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.375039ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52112]
I0920 04:28:35.771146  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.146113ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:35.870962  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.853492ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:35.871520  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:35.971505  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.499394ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:36.071010  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.967171ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:36.170934  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.021636ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:36.271054  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.025349ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:36.299417  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:36.370787  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.857404ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:36.371245  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:36.470980  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.972075ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:36.507016  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:36.571125  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.120017ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:36.574710  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:36.654836  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:36.671218  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.192497ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:36.770969  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.024398ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:36.870874  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.890775ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:36.871693  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:36.971019  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.002704ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:37.070875  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.955654ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:37.163200  108327 httplog.go:90] GET /api/v1/namespaces/default: (1.665456ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42428]
I0920 04:28:37.165478  108327 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.551151ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42428]
I0920 04:28:37.167363  108327 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.15576ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42428]
I0920 04:28:37.170142  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.285871ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:37.270709  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.68753ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:37.299641  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:37.370918  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.023817ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:37.371427  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:37.472841  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.293587ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:37.507254  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:37.571016  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.034664ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:37.574875  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:37.655233  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:37.670769  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.820017ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:37.770824  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.874315ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:37.793735  108327 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 20.048902866s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:28:22 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:28:37.793806  108327 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 20.048987816s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:28:12 +0000 UTC,LastTransitionTime:2019-09-20 04:28:22 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:28:37.793821  108327 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 20.049003433s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:28:12 +0000 UTC,LastTransitionTime:2019-09-20 04:28:22 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:28:37.793845  108327 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 20.049027948s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:28:12 +0000 UTC,LastTransitionTime:2019-09-20 04:28:22 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:28:37.793916  108327 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 20.049454681s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:28:22 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:28:37.793928  108327 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 20.049467368s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:28:12 +0000 UTC,LastTransitionTime:2019-09-20 04:28:22 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:28:37.793938  108327 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 20.049477187s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:28:12 +0000 UTC,LastTransitionTime:2019-09-20 04:28:22 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:28:37.793953  108327 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 20.049490314s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:28:12 +0000 UTC,LastTransitionTime:2019-09-20 04:28:22 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:28:37.794023  108327 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 20.049426645s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:28:22 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:28:37.794045  108327 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 20.049449791s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:28:12 +0000 UTC,LastTransitionTime:2019-09-20 04:28:22 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:28:37.794055  108327 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 20.04946031s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:28:12 +0000 UTC,LastTransitionTime:2019-09-20 04:28:22 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:28:37.794070  108327 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 20.049475443s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:28:12 +0000 UTC,LastTransitionTime:2019-09-20 04:28:22 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:28:37.804553  108327 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 20.057393401s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:28:22 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:28:37.804615  108327 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 20.057470842s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:28:12 +0000 UTC,LastTransitionTime:2019-09-20 04:28:22 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:28:37.804629  108327 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 20.057486342s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:28:12 +0000 UTC,LastTransitionTime:2019-09-20 04:28:22 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:28:37.804641  108327 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 20.057497891s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:28:12 +0000 UTC,LastTransitionTime:2019-09-20 04:28:22 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:28:37.804818  108327 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 20.05758728s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:28:22 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:28:37.804840  108327 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 20.057642626s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:28:12 +0000 UTC,LastTransitionTime:2019-09-20 04:28:22 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:28:37.804855  108327 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 20.057658316s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:28:12 +0000 UTC,LastTransitionTime:2019-09-20 04:28:22 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:28:37.804866  108327 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 20.057669036s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:28:12 +0000 UTC,LastTransitionTime:2019-09-20 04:28:22 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:28:37.804903  108327 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 20.057836968s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:28:22 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:28:37.804919  108327 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 20.05785296s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:28:12 +0000 UTC,LastTransitionTime:2019-09-20 04:28:22 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:28:37.804929  108327 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 20.057863326s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:28:12 +0000 UTC,LastTransitionTime:2019-09-20 04:28:22 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:28:37.804938  108327 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 20.057872508s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:28:12 +0000 UTC,LastTransitionTime:2019-09-20 04:28:22 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
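The block above is the node lifecycle controller noticing that no kubelet heartbeat has arrived within the node monitor grace period and flipping each node condition to Status:Unknown; those Unknown conditions are what later drive the NoExecute taints this test depends on. A minimal sketch of that kind of check, with hypothetical names (markConditionsUnknown, probeTimestamp) standing in for the controller's real bookkeeping:

package sketch

import (
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/klog"
)

// markConditionsUnknown approximates the grace-period check behind the
// "node X hasn't been updated for ..." messages above: once the last
// probe is older than the grace period, every condition is rewritten
// to Unknown with the reason/message seen in the log.
func markConditionsUnknown(node *v1.Node, probeTimestamp time.Time, gracePeriod time.Duration) {
	elapsed := time.Since(probeTimestamp)
	if elapsed <= gracePeriod {
		return // heartbeat still fresh; leave conditions alone
	}
	now := metav1.Now()
	for i := range node.Status.Conditions {
		cond := &node.Status.Conditions[i]
		klog.Infof("node %s hasn't been updated for %v. Last %s is: %+v",
			node.Name, elapsed, cond.Type, cond)
		if cond.Status != v1.ConditionUnknown {
			cond.Status = v1.ConditionUnknown
			cond.Reason = "NodeStatusUnknown"
			cond.Message = "Kubelet stopped posting node status."
			cond.LastTransitionTime = now
		}
	}
}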
I0920 04:28:37.871034  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.026357ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:37.871947  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:37.872293  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:37.872587  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:37.873076  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:37.873094  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:37.873269  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:37.873603  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
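The recurring "forcing resync" lines are the shared informer factory's resync timers firing: each reflector periodically replays its cache to registered handlers even when nothing has changed, which is why the lines repeat on a fixed cadence for every informer the test wires up. The cadence is simply the resync period passed when the factory is built; a tiny sketch (the 30s value is illustrative, not what this test uses):

package sketch

import (
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
)

// newFactory shows where the resync cadence comes from: the default
// resync period handed to the factory at construction time.
func newFactory(clientset kubernetes.Interface) informers.SharedInformerFactory {
	return informers.NewSharedInformerFactory(clientset, 30*time.Second)
}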
I0920 04:28:37.970917  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.896165ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:38.071333  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.306371ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:38.170980  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.995195ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:38.271092  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.048329ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:38.299879  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:38.299879  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:38.299879  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:38.299879  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:38.299897  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:38.299909  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:38.371580  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.21753ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:38.371589  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:38.372019  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:38.372052  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:38.372359  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:38.372367  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:38.374714  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:38.450182  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:38.450227  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:38.450524  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:38.450562  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:38.452140  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:38.452722  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:38.471116  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.118709ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:38.507381  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:38.571260  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.306111ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:38.575108  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:38.655642  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:38.671291  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.316796ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:38.771090  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.045959ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:38.871161  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.094878ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:38.872177  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:38.872460  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:38.872742  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:38.873229  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:38.873233  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:38.873426  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:38.873786  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:38.970942  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.999922ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:39.071004  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.980774ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:39.170748  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.85663ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:39.270945  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.959442ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:39.300158  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:39.300158  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:39.300281  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:39.300305  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:39.300309  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:39.300311  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:39.370744  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.630474ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:39.372073  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:39.372195  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:39.372221  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:39.372632  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:39.372701  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:39.374920  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:39.450378  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:39.450408  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:39.450667  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:39.450710  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:39.452294  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:39.452946  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:39.471034  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.018685ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:39.507614  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:39.570954  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.974566ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:39.575348  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:39.655910  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:39.670939  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.971751ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:39.771724  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.718702ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:39.870909  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.886367ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:39.872347  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:39.872676  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:39.872991  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:39.873412  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:39.873435  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:39.873622  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:39.873993  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:39.970741  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.768178ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:40.071073  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.073215ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:40.170661  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.685172ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:40.270982  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.984543ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:40.301299  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:40.301329  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:40.301357  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:40.301376  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:40.301301  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:40.301320  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:40.371024  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.043012ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:40.372235  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:40.372337  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:40.372369  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:40.372840  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:40.372840  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:40.375203  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:40.450629  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:40.450629  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:40.450821  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:40.450850  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:40.452454  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:40.453138  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:40.470977  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.948478ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:40.507941  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:40.571101  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.999888ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:40.575537  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:40.656122  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:40.670914  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.912287ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:40.771168  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.177228ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:40.870579  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.644293ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:40.872641  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:40.872920  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:40.873149  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:40.873591  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:40.873699  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:40.873832  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:40.874219  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:40.970953  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.915822ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:41.070826  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.847165ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:41.170972  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.03261ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:41.270953  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.837674ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:41.301544  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:41.301593  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:41.301629  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:41.301657  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:41.301687  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:41.301597  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:41.371258  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.163494ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:41.372446  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:41.372462  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:41.372568  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:41.373003  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:41.373013  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:41.375271  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:41.450905  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:41.451013  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:41.450929  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:41.451047  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:41.452630  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:41.453342  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:41.471089  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.126288ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:41.508146  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:41.570962  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.896363ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:41.575756  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:41.656349  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:41.670973  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.978974ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:41.771264  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.241975ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:41.871220  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.27173ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:41.872833  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:41.873109  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:41.873406  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:41.873761  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:41.873907  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:41.874049  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:41.874411  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:41.971075  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.007706ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:42.071034  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.054936ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:42.083710  108327 httplog.go:90] GET /api/v1/namespaces/default: (1.88479ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:28:42.086025  108327 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.525148ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:28:42.088064  108327 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.270796ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:28:42.171121  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.092868ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:42.251775  108327 httplog.go:90] GET /api/v1/namespaces/default: (1.929084ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:42.253866  108327 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.560978ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:42.255837  108327 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.248201ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:42.271248  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.223083ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:42.301755  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:42.301856  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:42.301862  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:42.301883  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:42.301909  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:42.301909  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:42.371170  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.142189ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:42.372616  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:42.372618  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:42.372721  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:42.373180  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:42.373199  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:42.375377  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:42.451178  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:42.451178  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:42.451178  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:42.451214  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:42.452811  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:42.453607  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:42.471234  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.206541ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:42.508421  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:42.570936  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.913053ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:42.575936  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:42.656630  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:42.671194  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.06851ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:42.771084  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.030515ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:42.794356  108327 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 25.049522812s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:28:22 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:28:42.794453  108327 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 25.049634734s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:28:12 +0000 UTC,LastTransitionTime:2019-09-20 04:28:22 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:28:42.794468  108327 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 25.049650669s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:28:12 +0000 UTC,LastTransitionTime:2019-09-20 04:28:22 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:28:42.794499  108327 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 25.049681399s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:28:12 +0000 UTC,LastTransitionTime:2019-09-20 04:28:22 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:28:42.794619  108327 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 25.050155617s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:28:22 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:28:42.794642  108327 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 25.050180582s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:28:12 +0000 UTC,LastTransitionTime:2019-09-20 04:28:22 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:28:42.794653  108327 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 25.050192066s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:28:12 +0000 UTC,LastTransitionTime:2019-09-20 04:28:22 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:28:42.794670  108327 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 25.05020874s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:28:12 +0000 UTC,LastTransitionTime:2019-09-20 04:28:22 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:28:42.794714  108327 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 25.050119438s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:28:22 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:28:42.794730  108327 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 25.050133591s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:28:12 +0000 UTC,LastTransitionTime:2019-09-20 04:28:22 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:28:42.794749  108327 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 25.050153052s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:28:12 +0000 UTC,LastTransitionTime:2019-09-20 04:28:22 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:28:42.794760  108327 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 25.050164825s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:28:12 +0000 UTC,LastTransitionTime:2019-09-20 04:28:22 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:28:42.805233  108327 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 25.058156967s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:28:22 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:28:42.805304  108327 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 25.058237448s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:28:12 +0000 UTC,LastTransitionTime:2019-09-20 04:28:22 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:28:42.805318  108327 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 25.058251421s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:28:12 +0000 UTC,LastTransitionTime:2019-09-20 04:28:22 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:28:42.805329  108327 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 25.058262661s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:28:12 +0000 UTC,LastTransitionTime:2019-09-20 04:28:22 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:28:42.805579  108327 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 25.058428631s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:28:22 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:28:42.805623  108327 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 25.058478893s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:28:12 +0000 UTC,LastTransitionTime:2019-09-20 04:28:22 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:28:42.805639  108327 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 25.058495102s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:28:12 +0000 UTC,LastTransitionTime:2019-09-20 04:28:22 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:28:42.805655  108327 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 25.058510578s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:28:12 +0000 UTC,LastTransitionTime:2019-09-20 04:28:22 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:28:42.805808  108327 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 25.058608183s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:28:22 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:28:42.805832  108327 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 25.058635054s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:28:12 +0000 UTC,LastTransitionTime:2019-09-20 04:28:22 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:28:42.805843  108327 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 25.058645745s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:28:12 +0000 UTC,LastTransitionTime:2019-09-20 04:28:22 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:28:42.805853  108327 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 25.058655703s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:28:12 +0000 UTC,LastTransitionTime:2019-09-20 04:28:22 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:28:42.871008  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.925275ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:42.872973  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:42.873274  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:42.873509  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:42.873805  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.058624ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:42.873881  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:42.874033  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
Sep 20 04:28:42.874: INFO: Waiting up to 15s for pod "testpod-2" in namespace "taint-based-evictionsab075fea-193d-4176-9122-ae81dac03dbf" to be "terminating"
I0920 04:28:42.874246  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:42.874546  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:42.876023  108327 httplog.go:90] GET /api/v1/namespaces/taint-based-evictionsab075fea-193d-4176-9122-ae81dac03dbf/pods/testpod-2: (1.569966ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
Sep 20 04:28:42.876: INFO: Pod "testpod-2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.155487ms
Sep 20 04:28:42.876: INFO: Pod "testpod-2" satisfied condition "terminating"
I0920 04:28:42.881036  108327 httplog.go:90] DELETE /api/v1/namespaces/taint-based-evictionsab075fea-193d-4176-9122-ae81dac03dbf/pods/testpod-2: (4.415532ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:42.881613  108327 taint_manager.go:383] Noticed pod deletion: types.NamespacedName{Namespace:"taint-based-evictionsab075fea-193d-4176-9122-ae81dac03dbf", Name:"testpod-2"}
I0920 04:28:42.881646  108327 timed_workers.go:129] Cancelling TimedWorkerQueue item taint-based-evictionsab075fea-193d-4176-9122-ae81dac03dbf/testpod-2 at 2019-09-20 04:28:42.8816421 +0000 UTC m=+391.180409660
I0920 04:28:42.881647  108327 taint_manager.go:383] Noticed pod deletion: types.NamespacedName{Namespace:"taint-based-evictionsab075fea-193d-4176-9122-ae81dac03dbf", Name:"testpod-2"}
I0920 04:28:42.881668  108327 timed_workers.go:129] Cancelling TimedWorkerQueue item taint-based-evictionsab075fea-193d-4176-9122-ae81dac03dbf/testpod-2 at 2019-09-20 04:28:42.881665792 +0000 UTC m=+391.180433351
I0920 04:28:42.883622  108327 httplog.go:90] GET /api/v1/namespaces/taint-based-evictionsab075fea-193d-4176-9122-ae81dac03dbf/pods/testpod-2: (947.46µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
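Note that the "terminating" condition above is satisfied while the pod is still Phase="Pending": taint-based eviction deletes the pod through the API, which sets metadata.deletionTimestamp without changing the phase, so the harness's condition is effectively a deletionTimestamp check. A plausible shape for that poll, with waitForPodTerminating as a hypothetical name (client calls use the context-free signatures of this era of client-go):

package sketch

import (
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPodTerminating polls until the pod's deletionTimestamp is set
// (or the pod is already gone), which is what the log's `to be
// "terminating"` condition amounts to.
func waitForPodTerminating(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(100*time.Millisecond, timeout, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(name, metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			return true, nil // already deleted counts as terminating
		}
		if err != nil {
			return false, err
		}
		return pod.DeletionTimestamp != nil, nil
	})
}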
I0920 04:28:42.888650  108327 node_tree.go:113] Removed node "node-0" in group "region1:\x00:zone1" from NodeTree
I0920 04:28:42.888695  108327 taint_manager.go:422] Noticed node deletion: "node-0"
I0920 04:28:42.888717  108327 taint_manager.go:422] Noticed node deletion: "node-0"
I0920 04:28:42.890662  108327 node_tree.go:113] Removed node "node-1" in group "region1:\x00:zone1" from NodeTree
I0920 04:28:42.890712  108327 taint_manager.go:422] Noticed node deletion: "node-1"
I0920 04:28:42.890734  108327 taint_manager.go:422] Noticed node deletion: "node-1"
I0920 04:28:42.893049  108327 httplog.go:90] DELETE /api/v1/nodes: (8.985755ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37740]
I0920 04:28:42.893258  108327 taint_manager.go:422] Noticed node deletion: "node-2"
I0920 04:28:42.893263  108327 taint_manager.go:422] Noticed node deletion: "node-2"
I0920 04:28:42.893263  108327 node_tree.go:113] Removed node "node-2" in group "region1:\x00:zone1" from NodeTree
I0920 04:28:43.302056  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:43.302081  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:43.302093  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:43.302111  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:43.302081  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:43.302136  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:43.372798  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:43.372798  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:43.372865  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:43.373350  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:43.373490  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:43.375604  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:43.451596  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:43.451608  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:43.451723  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:43.451801  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:43.452987  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:43.453776  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:43.508758  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:43.576152  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:43.656825  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:43.873174  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:43.873590  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:43.873696  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:43.874055  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:43.874201  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:43.874430  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:28:43.874755  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
    --- FAIL: TestTaintBasedEvictions/Taint_based_evictions_for_NodeNotReady_and_0_tolerationseconds (35.09s)
        taint_test.go:770: Failed to taint node in test 2 <node-2>, err: timed out waiting for the condition

				from junit_d965d8661547eb73cabe6d94d5550ec333e4c0fa_20190920-041605.xml

Find taint-based-evictionsab075fea-193d-4176-9122-ae81dac03dbf/testpod-2 mentions in log files | View test history on testgrid
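"timed out waiting for the condition" is the fixed message of wait.ErrWaitTimeout from k8s.io/apimachinery/pkg/util/wait, so this subtest failed because the node.kubernetes.io/not-ready NoExecute taint never appeared on node-2 before the poll gave up, even though the lifecycle controller had already marked the node's conditions Unknown. A sketch of the kind of wait that surfaces this error (waitForNodeTaint and its 30s budget are hypothetical, not the test's actual helper):

package sketch

import (
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForNodeTaint polls until the wanted taint appears on the node;
// if it never does, wait.PollImmediate returns wait.ErrWaitTimeout,
// whose message is exactly "timed out waiting for the condition".
func waitForNodeTaint(cs kubernetes.Interface, nodeName string, want v1.Taint) error {
	return wait.PollImmediate(100*time.Millisecond, 30*time.Second, func() (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(nodeName, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, t := range node.Spec.Taints {
			if t.Key == want.Key && t.Effect == want.Effect {
				return true, nil
			}
		}
		return false, nil
	})
}

For this test the wanted taint would be v1.Taint{Key: "node.kubernetes.io/not-ready", Effect: v1.TaintEffectNoExecute}.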


k8s.io/kubernetes/test/integration/scheduler TestTaintBasedEvictions/Taint_based_evictions_for_NodeNotReady_and_200_tolerationseconds 35s

go test -v k8s.io/kubernetes/test/integration/scheduler -run TestTaintBasedEvictions/Taint_based_evictions_for_NodeNotReady_and_200_tolerationseconds$
=== RUN   TestTaintBasedEvictions/Taint_based_evictions_for_NodeNotReady_and_200_tolerationseconds
W0920 04:26:58.607275  108327 services.go:35] No CIDR for service cluster IPs specified. Default value which was 10.0.0.0/24 is deprecated and will be removed in future releases. Please specify it using --service-cluster-ip-range on kube-apiserver.
I0920 04:26:58.607306  108327 services.go:47] Setting service IP to "10.0.0.1" (read-write).
I0920 04:26:58.607321  108327 master.go:303] Node port range unspecified. Defaulting to 30000-32767.
I0920 04:26:58.607333  108327 master.go:259] Using reconciler: 
I0920 04:26:58.609366  108327 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.609710  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:26:58.609899  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:26:58.610856  108327 store.go:1342] Monitoring podtemplates count at <storage-prefix>//podtemplates
I0920 04:26:58.610909  108327 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.610947  108327 reflector.go:153] Listing and watching *core.PodTemplate from storage/cacher.go:/podtemplates
I0920 04:26:58.611375  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:26:58.611437  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:26:58.612349  108327 store.go:1342] Monitoring events count at <storage-prefix>//events
I0920 04:26:58.612420  108327 reflector.go:153] Listing and watching *core.Event from storage/cacher.go:/events
I0920 04:26:58.612409  108327 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.612662  108327 watch_cache.go:405] Replace watchCache (rev: 58818) 
I0920 04:26:58.612730  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:26:58.612754  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:26:58.613352  108327 watch_cache.go:405] Replace watchCache (rev: 58818) 
I0920 04:26:58.613510  108327 store.go:1342] Monitoring limitranges count at <storage-prefix>//limitranges
I0920 04:26:58.613545  108327 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.613722  108327 reflector.go:153] Listing and watching *core.LimitRange from storage/cacher.go:/limitranges
I0920 04:26:58.613756  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:26:58.613777  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:26:58.614675  108327 store.go:1342] Monitoring resourcequotas count at <storage-prefix>//resourcequotas
I0920 04:26:58.614849  108327 reflector.go:153] Listing and watching *core.ResourceQuota from storage/cacher.go:/resourcequotas
I0920 04:26:58.614903  108327 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.615146  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:26:58.615274  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:26:58.615905  108327 watch_cache.go:405] Replace watchCache (rev: 58818) 
I0920 04:26:58.616065  108327 watch_cache.go:405] Replace watchCache (rev: 58818) 
I0920 04:26:58.616291  108327 store.go:1342] Monitoring secrets count at <storage-prefix>//secrets
I0920 04:26:58.616437  108327 reflector.go:153] Listing and watching *core.Secret from storage/cacher.go:/secrets
I0920 04:26:58.616542  108327 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.616708  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:26:58.616728  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:26:58.617748  108327 watch_cache.go:405] Replace watchCache (rev: 58818) 
I0920 04:26:58.618682  108327 store.go:1342] Monitoring persistentvolumes count at <storage-prefix>//persistentvolumes
I0920 04:26:58.618765  108327 reflector.go:153] Listing and watching *core.PersistentVolume from storage/cacher.go:/persistentvolumes
I0920 04:26:58.618820  108327 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.619049  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:26:58.619076  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:26:58.619803  108327 watch_cache.go:405] Replace watchCache (rev: 58818) 
I0920 04:26:58.620178  108327 store.go:1342] Monitoring persistentvolumeclaims count at <storage-prefix>//persistentvolumeclaims
I0920 04:26:58.620246  108327 reflector.go:153] Listing and watching *core.PersistentVolumeClaim from storage/cacher.go:/persistentvolumeclaims
I0920 04:26:58.620362  108327 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.620685  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:26:58.620704  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:26:58.621438  108327 watch_cache.go:405] Replace watchCache (rev: 58818) 
I0920 04:26:58.621595  108327 store.go:1342] Monitoring configmaps count at <storage-prefix>//configmaps
I0920 04:26:58.621630  108327 reflector.go:153] Listing and watching *core.ConfigMap from storage/cacher.go:/configmaps
I0920 04:26:58.621750  108327 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.621972  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:26:58.622018  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:26:58.622554  108327 watch_cache.go:405] Replace watchCache (rev: 58818) 
I0920 04:26:58.623485  108327 store.go:1342] Monitoring namespaces count at <storage-prefix>//namespaces
I0920 04:26:58.623623  108327 reflector.go:153] Listing and watching *core.Namespace from storage/cacher.go:/namespaces
I0920 04:26:58.623663  108327 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.623826  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:26:58.623847  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:26:58.624673  108327 watch_cache.go:405] Replace watchCache (rev: 58818) 
I0920 04:26:58.625418  108327 store.go:1342] Monitoring endpoints count at <storage-prefix>//services/endpoints
I0920 04:26:58.625568  108327 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.625746  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:26:58.625762  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:26:58.625820  108327 reflector.go:153] Listing and watching *core.Endpoints from storage/cacher.go:/services/endpoints
I0920 04:26:58.626738  108327 store.go:1342] Monitoring nodes count at <storage-prefix>//minions
I0920 04:26:58.626906  108327 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.627145  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:26:58.627171  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:26:58.627202  108327 reflector.go:153] Listing and watching *core.Node from storage/cacher.go:/minions
I0920 04:26:58.628099  108327 store.go:1342] Monitoring pods count at <storage-prefix>//pods
I0920 04:26:58.628232  108327 reflector.go:153] Listing and watching *core.Pod from storage/cacher.go:/pods
I0920 04:26:58.628257  108327 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.628731  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:26:58.628866  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:26:58.629613  108327 watch_cache.go:405] Replace watchCache (rev: 58818) 
I0920 04:26:58.629957  108327 store.go:1342] Monitoring serviceaccounts count at <storage-prefix>//serviceaccounts
I0920 04:26:58.630145  108327 reflector.go:153] Listing and watching *core.ServiceAccount from storage/cacher.go:/serviceaccounts
I0920 04:26:58.630315  108327 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.630636  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:26:58.630768  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
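The repeated `parsed scheme: "endpoint"` / `ccResolverWrapper: sending new addresses` pairs are the etcd v3 client dialing etcd over gRPC with its custom "endpoint" resolver; one client is dialed per resource store, which is why the pair recurs for every resource above. A rough standalone equivalent, assuming the etcd 3.4-era go.etcd.io/etcd/clientv3 import path (illustrative only):

package main

import (
	"context"
	"log"
	"time"

	"go.etcd.io/etcd/clientv3"
)

func main() {
	// Dialing this client produces the same gRPC resolver logging seen above.
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"http://127.0.0.1:2379"}, // address from the log
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	// A throwaway read under the test's etcd prefix just to prove
	// connectivity; the key name here is hypothetical.
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()
	if _, err := cli.Get(ctx, "a3a331f8-5bf8-422e-9003-d45c96363f0d/ping"); err != nil {
		log.Fatal(err)
	}
}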
I0920 04:26:58.630872  108327 watch_cache.go:405] Replace watchCache (rev: 58819) 
I0920 04:26:58.631805  108327 watch_cache.go:405] Replace watchCache (rev: 58819) 
I0920 04:26:58.632254  108327 store.go:1342] Monitoring services count at <storage-prefix>//services/specs
I0920 04:26:58.632291  108327 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.632507  108327 reflector.go:153] Listing and watching *core.Service from storage/cacher.go:/services/specs
I0920 04:26:58.632520  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:26:58.632812  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:26:58.632909  108327 watch_cache.go:405] Replace watchCache (rev: 58819) 
I0920 04:26:58.633765  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:26:58.633905  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:26:58.634203  108327 watch_cache.go:405] Replace watchCache (rev: 58819) 
I0920 04:26:58.635369  108327 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.635675  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:26:58.635786  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:26:58.636766  108327 store.go:1342] Monitoring replicationcontrollers count at <storage-prefix>//controllers
I0920 04:26:58.636793  108327 rest.go:115] the default service ipfamily for this cluster is: IPv4
I0920 04:26:58.636936  108327 reflector.go:153] Listing and watching *core.ReplicationController from storage/cacher.go:/controllers
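Every `store.go:1342] Monitoring ...` / `reflector.go:153] Listing and watching ...` pair is the watch cache priming itself: a client-go Reflector issues one LIST, then a WATCH from the returned resourceVersion, and the `Replace watchCache (rev: ...)` lines record that LIST result landing in the cache. A user-level sketch of the same list-then-watch loop, with a hypothetical apiserver address (not the cacher's internal wiring):

package main

import (
	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/fields"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/cache"
)

func main() {
	// Hypothetical insecure apiserver address, for illustration only.
	client := kubernetes.NewForConfigOrDie(&rest.Config{Host: "http://127.0.0.1:8080"})

	lw := cache.NewListWatchFromClient(
		client.CoreV1().RESTClient(), "pods", "", fields.Everything())
	store := cache.NewStore(cache.MetaNamespaceKeyFunc)

	// LIST once, replace the store contents, then WATCH; this is what a
	// "Listing and watching *core.Pod" line records.
	r := cache.NewReflector(lw, &v1.Pod{}, store, 0)
	stop := make(chan struct{})
	r.Run(stop) // blocks; re-lists whenever the watch breaks
}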
I0920 04:26:58.637255  108327 storage_factory.go:285] storing bindings in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.637496  108327 storage_factory.go:285] storing componentstatuses in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.638162  108327 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.638206  108327 watch_cache.go:405] Replace watchCache (rev: 58819) 
I0920 04:26:58.639100  108327 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.639842  108327 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.640611  108327 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.641070  108327 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.641324  108327 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.641658  108327 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.642260  108327 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.643482  108327 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.643635  108327 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.644172  108327 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.644383  108327 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.644909  108327 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.645135  108327 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.645852  108327 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.646063  108327 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.646533  108327 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.646678  108327 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.646929  108327 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.647070  108327 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.647234  108327 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.648260  108327 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.648575  108327 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.651590  108327 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.652416  108327 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.652779  108327 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.653093  108327 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.653883  108327 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.654277  108327 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.654999  108327 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.655804  108327 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.656461  108327 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.657296  108327 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.657711  108327 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.657973  108327 master.go:450] Skipping disabled API group "auditregistration.k8s.io".
I0920 04:26:58.658069  108327 master.go:461] Enabling API group "authentication.k8s.io".
I0920 04:26:58.658150  108327 master.go:461] Enabling API group "authorization.k8s.io".
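The master.go:450/461 enable-or-skip lines come from checking each group/version against the server's resource config (defaults merged with any --runtime-config overrides). A hedged sketch of that check using k8s.io/apiserver's ResourceConfig; the group/version choices mirror this run's log, and the exact helper names are my assumption about this commit's API:

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/runtime/schema"
	serverstorage "k8s.io/apiserver/pkg/server/storage"
)

func main() {
	cfg := serverstorage.NewResourceConfig()
	// Mirrors what this run shows: autoscaling on, auditregistration off.
	cfg.EnableVersions(schema.GroupVersion{Group: "autoscaling", Version: "v1"})
	cfg.DisableVersions(schema.GroupVersion{Group: "auditregistration.k8s.io", Version: "v1alpha1"})

	for _, gv := range []schema.GroupVersion{
		{Group: "autoscaling", Version: "v1"},
		{Group: "auditregistration.k8s.io", Version: "v1alpha1"},
	} {
		if cfg.VersionEnabled(gv) {
			fmt.Printf("Enabling API group %q.\n", gv.Group)
		} else {
			fmt.Printf("Skipping disabled API group %q.\n", gv.Group)
		}
	}
}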
I0920 04:26:58.658453  108327 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.658887  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:26:58.659016  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:26:58.660017  108327 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0920 04:26:58.660124  108327 reflector.go:153] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0920 04:26:58.660423  108327 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.660754  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:26:58.660805  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:26:58.662093  108327 watch_cache.go:405] Replace watchCache (rev: 58819) 
I0920 04:26:58.663370  108327 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0920 04:26:58.663565  108327 reflector.go:153] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0920 04:26:58.663819  108327 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.664739  108327 watch_cache.go:405] Replace watchCache (rev: 58819) 
I0920 04:26:58.665615  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:26:58.665849  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:26:58.667469  108327 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0920 04:26:58.667628  108327 reflector.go:153] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0920 04:26:58.667709  108327 master.go:461] Enabling API group "autoscaling".
I0920 04:26:58.667949  108327 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.668176  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:26:58.668199  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:26:58.669187  108327 store.go:1342] Monitoring jobs.batch count at <storage-prefix>//jobs
I0920 04:26:58.669278  108327 reflector.go:153] Listing and watching *batch.Job from storage/cacher.go:/jobs
I0920 04:26:58.669366  108327 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.669566  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:26:58.669596  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:26:58.670045  108327 watch_cache.go:405] Replace watchCache (rev: 58820) 
I0920 04:26:58.670861  108327 store.go:1342] Monitoring cronjobs.batch count at <storage-prefix>//cronjobs
I0920 04:26:58.670888  108327 master.go:461] Enabling API group "batch".
I0920 04:26:58.670964  108327 watch_cache.go:405] Replace watchCache (rev: 58820) 
I0920 04:26:58.671048  108327 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.671270  108327 reflector.go:153] Listing and watching *batch.CronJob from storage/cacher.go:/cronjobs
I0920 04:26:58.671273  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:26:58.671363  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:26:58.672047  108327 store.go:1342] Monitoring certificatesigningrequests.certificates.k8s.io count at <storage-prefix>//certificatesigningrequests
I0920 04:26:58.672081  108327 master.go:461] Enabling API group "certificates.k8s.io".
I0920 04:26:58.672196  108327 reflector.go:153] Listing and watching *certificates.CertificateSigningRequest from storage/cacher.go:/certificatesigningrequests
I0920 04:26:58.672193  108327 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.672409  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:26:58.672431  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:26:58.672597  108327 watch_cache.go:405] Replace watchCache (rev: 58820) 
I0920 04:26:58.673122  108327 watch_cache.go:405] Replace watchCache (rev: 58820) 
I0920 04:26:58.673357  108327 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0920 04:26:58.673500  108327 reflector.go:153] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0920 04:26:58.673502  108327 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.673838  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:26:58.673859  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:26:58.674728  108327 watch_cache.go:405] Replace watchCache (rev: 58820) 
I0920 04:26:58.674950  108327 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0920 04:26:58.674966  108327 master.go:461] Enabling API group "coordination.k8s.io".
I0920 04:26:58.674977  108327 master.go:450] Skipping disabled API group "discovery.k8s.io".
I0920 04:26:58.675048  108327 reflector.go:153] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0920 04:26:58.675126  108327 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.675298  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:26:58.675313  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:26:58.675793  108327 watch_cache.go:405] Replace watchCache (rev: 58820) 
I0920 04:26:58.676587  108327 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0920 04:26:58.676612  108327 master.go:461] Enabling API group "extensions".
I0920 04:26:58.676625  108327 reflector.go:153] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0920 04:26:58.676723  108327 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.676884  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:26:58.676898  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:26:58.677541  108327 watch_cache.go:405] Replace watchCache (rev: 58820) 
I0920 04:26:58.678203  108327 store.go:1342] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I0920 04:26:58.678230  108327 reflector.go:153] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I0920 04:26:58.678370  108327 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.678593  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:26:58.678616  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:26:58.679210  108327 watch_cache.go:405] Replace watchCache (rev: 58820) 
I0920 04:26:58.679332  108327 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0920 04:26:58.679351  108327 master.go:461] Enabling API group "networking.k8s.io".
I0920 04:26:58.679379  108327 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.679417  108327 reflector.go:153] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0920 04:26:58.679555  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:26:58.679574  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:26:58.680135  108327 watch_cache.go:405] Replace watchCache (rev: 58820) 
I0920 04:26:58.680876  108327 store.go:1342] Monitoring runtimeclasses.node.k8s.io count at <storage-prefix>//runtimeclasses
I0920 04:26:58.680896  108327 master.go:461] Enabling API group "node.k8s.io".
I0920 04:26:58.681018  108327 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.681224  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:26:58.681242  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:26:58.681317  108327 reflector.go:153] Listing and watching *node.RuntimeClass from storage/cacher.go:/runtimeclasses
I0920 04:26:58.682125  108327 store.go:1342] Monitoring poddisruptionbudgets.policy count at <storage-prefix>//poddisruptionbudgets
I0920 04:26:58.682268  108327 reflector.go:153] Listing and watching *policy.PodDisruptionBudget from storage/cacher.go:/poddisruptionbudgets
I0920 04:26:58.682314  108327 watch_cache.go:405] Replace watchCache (rev: 58820) 
I0920 04:26:58.682577  108327 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.682911  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:26:58.683749  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:26:58.683623  108327 watch_cache.go:405] Replace watchCache (rev: 58820) 
I0920 04:26:58.685040  108327 store.go:1342] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicy
I0920 04:26:58.685108  108327 reflector.go:153] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicy
I0920 04:26:58.685183  108327 master.go:461] Enabling API group "policy".
I0920 04:26:58.685308  108327 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.685579  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:26:58.685601  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:26:58.686225  108327 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0920 04:26:58.686313  108327 reflector.go:153] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0920 04:26:58.686366  108327 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.686731  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:26:58.686737  108327 watch_cache.go:405] Replace watchCache (rev: 58820) 
I0920 04:26:58.686757  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:26:58.687355  108327 watch_cache.go:405] Replace watchCache (rev: 58820) 
I0920 04:26:58.688539  108327 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0920 04:26:58.688565  108327 reflector.go:153] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0920 04:26:58.688596  108327 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.688815  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:26:58.688838  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:26:58.689649  108327 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0920 04:26:58.689743  108327 watch_cache.go:405] Replace watchCache (rev: 58820) 
I0920 04:26:58.689844  108327 reflector.go:153] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0920 04:26:58.689913  108327 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.690349  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:26:58.690453  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:26:58.690602  108327 watch_cache.go:405] Replace watchCache (rev: 58820) 
I0920 04:26:58.691054  108327 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0920 04:26:58.691083  108327 reflector.go:153] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0920 04:26:58.691275  108327 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.691559  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:26:58.691689  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:26:58.692118  108327 watch_cache.go:405] Replace watchCache (rev: 58820) 
I0920 04:26:58.692359  108327 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0920 04:26:58.692383  108327 reflector.go:153] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0920 04:26:58.692506  108327 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.692703  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:26:58.692732  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:26:58.693367  108327 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0920 04:26:58.693434  108327 reflector.go:153] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0920 04:26:58.693436  108327 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.693627  108327 watch_cache.go:405] Replace watchCache (rev: 58820) 
I0920 04:26:58.693630  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:26:58.693668  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:26:58.694728  108327 watch_cache.go:405] Replace watchCache (rev: 58820) 
I0920 04:26:58.694750  108327 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0920 04:26:58.694835  108327 reflector.go:153] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0920 04:26:58.694892  108327 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.695073  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:26:58.695096  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:26:58.695612  108327 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0920 04:26:58.695670  108327 master.go:461] Enabling API group "rbac.authorization.k8s.io".
I0920 04:26:58.695802  108327 reflector.go:153] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0920 04:26:58.695807  108327 watch_cache.go:405] Replace watchCache (rev: 58820) 
I0920 04:26:58.696520  108327 watch_cache.go:405] Replace watchCache (rev: 58820) 
I0920 04:26:58.697405  108327 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.697619  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:26:58.697651  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:26:58.698354  108327 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0920 04:26:58.698682  108327 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.698994  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:26:58.699159  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:26:58.698464  108327 reflector.go:153] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0920 04:26:58.699755  108327 watch_cache.go:405] Replace watchCache (rev: 58820) 
I0920 04:26:58.700014  108327 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0920 04:26:58.700028  108327 master.go:461] Enabling API group "scheduling.k8s.io".
I0920 04:26:58.700117  108327 master.go:450] Skipping disabled API group "settings.k8s.io".
I0920 04:26:58.700247  108327 reflector.go:153] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0920 04:26:58.700249  108327 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.700443  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:26:58.700465  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:26:58.701102  108327 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0920 04:26:58.701130  108327 watch_cache.go:405] Replace watchCache (rev: 58820) 
I0920 04:26:58.701209  108327 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.701341  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:26:58.701353  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:26:58.701353  108327 reflector.go:153] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0920 04:26:58.702340  108327 watch_cache.go:405] Replace watchCache (rev: 58820) 
I0920 04:26:58.702697  108327 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0920 04:26:58.702736  108327 reflector.go:153] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0920 04:26:58.703018  108327 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.703311  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:26:58.703517  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:26:58.703751  108327 watch_cache.go:405] Replace watchCache (rev: 58820) 
I0920 04:26:58.704527  108327 store.go:1342] Monitoring csinodes.storage.k8s.io count at <storage-prefix>//csinodes
I0920 04:26:58.704571  108327 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.704791  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:26:58.704817  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:26:58.704890  108327 reflector.go:153] Listing and watching *storage.CSINode from storage/cacher.go:/csinodes
I0920 04:26:58.705815  108327 watch_cache.go:405] Replace watchCache (rev: 58820) 
I0920 04:26:58.706260  108327 store.go:1342] Monitoring csidrivers.storage.k8s.io count at <storage-prefix>//csidrivers
I0920 04:26:58.706431  108327 reflector.go:153] Listing and watching *storage.CSIDriver from storage/cacher.go:/csidrivers
I0920 04:26:58.706472  108327 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.706675  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:26:58.706699  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:26:58.707083  108327 watch_cache.go:405] Replace watchCache (rev: 58820) 
I0920 04:26:58.707462  108327 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0920 04:26:58.707617  108327 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.707642  108327 reflector.go:153] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0920 04:26:58.707794  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:26:58.707810  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:26:58.708632  108327 watch_cache.go:405] Replace watchCache (rev: 58820) 
I0920 04:26:58.708722  108327 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0920 04:26:58.708749  108327 master.go:461] Enabling API group "storage.k8s.io".
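[Editor's note: the repeated storage_factory.go:285 lines above show the apiserver picking, per resource, one concrete version to encode into etcd ("storing ... in storage.k8s.io/v1beta1") while always decoding into the group's internal version ("reading as storage.k8s.io/__internal"). A minimal Go sketch of that pairing follows; the type and function names are hypothetical, not the real storage_factory API.]

```go
package main

import "fmt"

// resourceCodec is a hypothetical stand-in for what storage_factory
// builds per resource: objects are written to etcd in one concrete
// API version but always decoded into the group's __internal version.
type resourceCodec struct {
	Resource     string // e.g. "csinodes.storage.k8s.io"
	StoreVersion string // version used to encode into etcd
	ReadVersion  string // always "<group>/__internal" in these logs
}

func codecFor(resource, group, storeVersion string) resourceCodec {
	return resourceCodec{
		Resource:     resource,
		StoreVersion: group + "/" + storeVersion,
		ReadVersion:  group + "/__internal",
	}
}

func main() {
	c := codecFor("csinodes.storage.k8s.io", "storage.k8s.io", "v1beta1")
	// Mirrors the log line: storing csinodes.storage.k8s.io in
	// storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal.
	fmt.Printf("storing %s in %s, reading as %s\n",
		c.Resource, c.StoreVersion, c.ReadVersion)
}
```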
I0920 04:26:58.708911  108327 reflector.go:153] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0920 04:26:58.708921  108327 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.709190  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:26:58.709289  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:26:58.709598  108327 watch_cache.go:405] Replace watchCache (rev: 58821) 
I0920 04:26:58.710666  108327 store.go:1342] Monitoring deployments.apps count at <storage-prefix>//deployments
I0920 04:26:58.710826  108327 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.711024  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:26:58.711048  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:26:58.711154  108327 reflector.go:153] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0920 04:26:58.712097  108327 store.go:1342] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0920 04:26:58.712158  108327 reflector.go:153] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0920 04:26:58.712447  108327 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.712851  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:26:58.713039  108327 watch_cache.go:405] Replace watchCache (rev: 58821) 
I0920 04:26:58.713039  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:26:58.713591  108327 watch_cache.go:405] Replace watchCache (rev: 58821) 
I0920 04:26:58.713988  108327 store.go:1342] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0920 04:26:58.714181  108327 reflector.go:153] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0920 04:26:58.714176  108327 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.714361  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:26:58.714383  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:26:58.715100  108327 watch_cache.go:405] Replace watchCache (rev: 58821) 
I0920 04:26:58.715566  108327 store.go:1342] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0920 04:26:58.715657  108327 reflector.go:153] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0920 04:26:58.715867  108327 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.716029  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:26:58.716048  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:26:58.716713  108327 watch_cache.go:405] Replace watchCache (rev: 58821) 
I0920 04:26:58.716734  108327 reflector.go:153] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0920 04:26:58.716718  108327 store.go:1342] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0920 04:26:58.716759  108327 master.go:461] Enabling API group "apps".
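[Editor's note: each "Monitoring ... count" line from store.go:1342 is followed by a reflector.go:153 "Listing and watching" line and a watch_cache.go:405 "Replace watchCache" line: the watch cache does an initial LIST at some etcd revision, swaps its contents in wholesale, then keeps current via WATCH. A toy list-then-watch loop in that spirit is below; the real reflector lives in k8s.io/client-go, and these names are illustrative, not the actual API.]

```go
package main

import "fmt"

// event is a toy stand-in for a watch event at a given revision.
type event struct {
	rev int
	obj string
}

func listAndWatch(list func() ([]string, int), watch func(rev int) <-chan event) {
	objs, rev := list()
	// Matches the pattern of the watch_cache.go lines: the cache is
	// replaced wholesale at the listed revision, then kept current.
	fmt.Printf("Replace watchCache (rev: %d), %d objects\n", rev, len(objs))
	for ev := range watch(rev) {
		fmt.Printf("event at rev %d: %s\n", ev.rev, ev.obj)
	}
}

func main() {
	list := func() ([]string, int) { return []string{"daemonset-a"}, 58821 }
	watch := func(rev int) <-chan event {
		ch := make(chan event, 1)
		ch <- event{rev: rev + 1, obj: "daemonset-b added"}
		close(ch)
		return ch
	}
	listAndWatch(list, watch)
}
```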
I0920 04:26:58.716790  108327 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.716954  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:26:58.716977  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:26:58.717754  108327 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0920 04:26:58.717795  108327 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.717851  108327 reflector.go:153] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0920 04:26:58.717963  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:26:58.718010  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:26:58.718274  108327 watch_cache.go:405] Replace watchCache (rev: 58821) 
I0920 04:26:58.718686  108327 watch_cache.go:405] Replace watchCache (rev: 58821) 
I0920 04:26:58.719240  108327 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0920 04:26:58.719267  108327 reflector.go:153] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0920 04:26:58.719276  108327 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.719499  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:26:58.719521  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:26:58.720176  108327 watch_cache.go:405] Replace watchCache (rev: 58821) 
I0920 04:26:58.720201  108327 reflector.go:153] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0920 04:26:58.720152  108327 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0920 04:26:58.720252  108327 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.720508  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:26:58.720534  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:26:58.721171  108327 watch_cache.go:405] Replace watchCache (rev: 58821) 
I0920 04:26:58.721314  108327 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0920 04:26:58.721337  108327 master.go:461] Enabling API group "admissionregistration.k8s.io".
I0920 04:26:58.721370  108327 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.721385  108327 reflector.go:153] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0920 04:26:58.721691  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:26:58.721715  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:26:58.722892  108327 store.go:1342] Monitoring events count at <storage-prefix>//events
I0920 04:26:58.722918  108327 master.go:461] Enabling API group "events.k8s.io".
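[Editor's note: the recurring `parsed scheme: "endpoint"` / `ccResolverWrapper: sending new addresses to cc` pairs come from the etcd v3 client's gRPC resolver: every store gets its own client, and the resolver fans the configured ServerList out to the connection layer. The toy below only mimics that fan-out and the log line's shape (the real address struct carries extra fields, printed as `0  <nil>` above); everything here is an assumption-level simplification.]

```go
package main

import "fmt"

// address is a simplified stand-in for a gRPC resolver address.
type address struct{ Addr string }

// conn is a toy connection layer that just records and echoes the
// addresses it is handed, like the "sending new addresses to cc" log.
type conn struct{ addrs []address }

func (c *conn) updateAddresses(addrs []address) {
	c.addrs = addrs
	fmt.Printf("ccResolverWrapper: sending new addresses to cc: %v\n", addrs)
}

func main() {
	serverList := []string{"http://127.0.0.1:2379"} // from storagebackend.Config
	addrs := make([]address, 0, len(serverList))
	for _, s := range serverList {
		addrs = append(addrs, address{Addr: s})
	}
	var cc conn
	cc.updateAddresses(addrs)
}
```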
I0920 04:26:58.722987  108327 reflector.go:153] Listing and watching *core.Event from storage/cacher.go:/events
I0920 04:26:58.723181  108327 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.723443  108327 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.723781  108327 watch_cache.go:405] Replace watchCache (rev: 58821) 
I0920 04:26:58.723771  108327 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.724008  108327 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.724256  108327 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.724339  108327 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.724539  108327 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.724618  108327 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.724686  108327 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.724714  108327 watch_cache.go:405] Replace watchCache (rev: 58821) 
I0920 04:26:58.724748  108327 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.726076  108327 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.726753  108327 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.728080  108327 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.728453  108327 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.729802  108327 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.730294  108327 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.731210  108327 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.731697  108327 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.732685  108327 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.733066  108327 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0920 04:26:58.733468  108327 genericapiserver.go:404] Skipping API batch/v2alpha1 because it has no resources.
I0920 04:26:58.734335  108327 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.734635  108327 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.735052  108327 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.735955  108327 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.736652  108327 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.737723  108327 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.738282  108327 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.739110  108327 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.739875  108327 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.740314  108327 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.740960  108327 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0920 04:26:58.741121  108327 genericapiserver.go:404] Skipping API node.k8s.io/v1alpha1 because it has no resources.
I0920 04:26:58.741996  108327 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.742273  108327 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.742872  108327 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.747705  108327 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.748316  108327 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.748986  108327 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.749711  108327 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.750240  108327 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.750655  108327 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.751228  108327 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.751841  108327 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0920 04:26:58.751929  108327 genericapiserver.go:404] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
I0920 04:26:58.752445  108327 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.752984  108327 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0920 04:26:58.753034  108327 genericapiserver.go:404] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
I0920 04:26:58.753591  108327 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.754086  108327 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.754287  108327 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.754933  108327 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.755323  108327 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.755750  108327 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.756216  108327 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0920 04:26:58.756270  108327 genericapiserver.go:404] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
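[Editor's note: the `Skipping API ... because it has no resources` warnings fire when a group/version ends up with no enabled resources, so the apiserver installs nothing for it. The sketch below shows the shape of that decision under stated assumptions; it is not the real genericapiserver code.]

```go
package main

import "fmt"

// installGroupVersions is a hypothetical version of the install loop:
// a group/version is only served when at least one resource is enabled.
func installGroupVersions(group string, resourcesByVersion map[string][]string) {
	for version, resources := range resourcesByVersion {
		if len(resources) == 0 {
			fmt.Printf("Skipping API %s/%s because it has no resources.\n", group, version)
			continue
		}
		fmt.Printf("Installing %s/%s (%d resources)\n", group, version, len(resources))
	}
}

func main() {
	installGroupVersions("storage.k8s.io", map[string][]string{
		"v1":       {"storageclasses", "volumeattachments"},
		"v1beta1":  {"csidrivers", "csinodes", "storageclasses", "volumeattachments"},
		"v1alpha1": {}, // nothing enabled in this build, hence the warning
	})
}
```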
I0920 04:26:58.757038  108327 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.757631  108327 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.757869  108327 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.758506  108327 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.758780  108327 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.759077  108327 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.759820  108327 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.760124  108327 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.760370  108327 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.760932  108327 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.761135  108327 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.761342  108327 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0920 04:26:58.761433  108327 genericapiserver.go:404] Skipping API apps/v1beta2 because it has no resources.
W0920 04:26:58.761447  108327 genericapiserver.go:404] Skipping API apps/v1beta1 because it has no resources.
I0920 04:26:58.762059  108327 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.762647  108327 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.763344  108327 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.763787  108327 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:26:58.764533  108327 storage_factory.go:285] storing events.events.k8s.io in events.k8s.io/v1beta1, reading as events.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a3a331f8-5bf8-422e-9003-d45c96363f0d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
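[Editor's note: the storagebackend.Config dumps above print duration fields as raw nanosecond integers; `CompactionInterval:300000000000` is 5 minutes and `CountMetricPollPeriod:60000000000` is 1 minute. A tiny Go check, using only the standard library:]

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Values copied from the storagebackend.Config lines in this log.
	compactionInterval := time.Duration(300000000000)
	countMetricPollPeriod := time.Duration(60000000000)
	fmt.Println(compactionInterval)    // 5m0s
	fmt.Println(countMetricPollPeriod) // 1m0s
}
```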
I0920 04:26:58.767348  108327 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0920 04:26:58.767480  108327 healthz.go:177] healthz check poststarthook/bootstrap-controller failed: not finished
I0920 04:26:58.767508  108327 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:26:58.767520  108327 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0920 04:26:58.767529  108327 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0920 04:26:58.767537  108327 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[-]poststarthook/bootstrap-controller failed: reason withheld
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
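[Editor's note: the `[+]`/`[-]` block is the verbose /healthz report: one line per registered check, `reason withheld` for failures (the detailed reasons appear in the healthz.go:177 lines just above), and a trailing `healthz check failed` verdict when any check fails. The sketch below reconstructs that output format from what the log shows; it is not the real healthz.go implementation.]

```go
package main

import (
	"fmt"
	"strings"
)

// check pairs a healthz check name with its failure, if any.
type check struct {
	name string
	err  error
}

// report renders the [+]/[-] block seen in the log and returns the
// overall verdict; failing checks print "reason withheld" here, with
// the concrete reasons logged separately in the real server.
func report(checks []check) (string, bool) {
	var b strings.Builder
	healthy := true
	for _, c := range checks {
		if c.err != nil {
			healthy = false
			fmt.Fprintf(&b, "[-]%s failed: reason withheld\n", c.name)
		} else {
			fmt.Fprintf(&b, "[+]%s ok\n", c.name)
		}
	}
	if !healthy {
		b.WriteString("healthz check failed\n")
	}
	return b.String(), healthy
}

func main() {
	out, _ := report([]check{
		{name: "ping"},
		{name: "log"},
		{name: "etcd", err: fmt.Errorf("etcd client connection not yet established")},
		{name: "poststarthook/ca-registration", err: fmt.Errorf("not finished")},
	})
	fmt.Print(out)
}
```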
I0920 04:26:58.767575  108327 httplog.go:90] GET /healthz: (367.825µs) 0 [Go-http-client/1.1 127.0.0.1:41736]
I0920 04:26:58.769343  108327 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.254059ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41736]
I0920 04:26:58.771885  108327 httplog.go:90] GET /api/v1/services: (926.431µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41736]
I0920 04:26:58.775707  108327 httplog.go:90] GET /api/v1/services: (902.985µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41736]
I0920 04:26:58.777739  108327 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0920 04:26:58.777765  108327 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:26:58.777773  108327 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0920 04:26:58.777779  108327 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0920 04:26:58.777827  108327 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0920 04:26:58.777858  108327 httplog.go:90] GET /healthz: (203.003µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41736]
I0920 04:26:58.779794  108327 httplog.go:90] GET /api/v1/services: (1.314596ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41736]
I0920 04:26:58.780543  108327 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.479ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41738]
I0920 04:26:58.781531  108327 httplog.go:90] GET /api/v1/services: (2.361222ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41740]
I0920 04:26:58.782946  108327 httplog.go:90] POST /api/v1/namespaces: (1.943665ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41738]
I0920 04:26:58.784543  108327 httplog.go:90] GET /api/v1/namespaces/kube-public: (916.103µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41740]
I0920 04:26:58.786672  108327 httplog.go:90] POST /api/v1/namespaces: (1.667797ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41740]
I0920 04:26:58.788255  108327 httplog.go:90] GET /api/v1/namespaces/kube-node-lease: (1.08828ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41740]
I0920 04:26:58.791018  108327 httplog.go:90] POST /api/v1/namespaces: (2.176761ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41740]
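
The three GET-404/POST-201 pairs above are the bootstrap controller ensuring that the kube-system, kube-public and kube-node-lease namespaces exist. A rough get-or-create sketch of the same pattern with a recent (context-aware) client-go follows; the kubeconfig loading is an assumption, since the apiserver's bootstrap controller does this in-process.

// ensurens.go: mirror the GET-404-then-POST pattern from the log for
// the three system namespaces.
package main

import (
	"context"
	"log"

	v1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func ensureNamespace(ctx context.Context, cs kubernetes.Interface, name string) error {
	_, err := cs.CoreV1().Namespaces().Get(ctx, name, metav1.GetOptions{})
	if err == nil || !apierrors.IsNotFound(err) {
		return err // already exists (nil) or a real error
	}
	// GET returned 404, so create it: the POST /api/v1/namespaces 201 above.
	_, err = cs.CoreV1().Namespaces().Create(ctx, &v1.Namespace{
		ObjectMeta: metav1.ObjectMeta{Name: name},
	}, metav1.CreateOptions{})
	return err
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	for _, ns := range []string{"kube-system", "kube-public", "kube-node-lease"} {
		if err := ensureNamespace(context.Background(), cs, ns); err != nil {
			log.Fatal(err)
		}
	}
}
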
I0920 04:26:58.848449  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:26:58.848711  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:26:58.848819  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:26:58.851501  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:26:58.851539  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:26:58.853009  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:26:58.853354  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
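
The burst of reflector.go "forcing resync" lines comes from shared informers built by one SharedInformerFactory with a non-zero resync period: on each period the informer re-delivers every cached object to its UpdateFunc handlers, even when nothing changed. A small sketch under assumed values follows (in-cluster config and a 12h period; the test fixture picks its own period).

// resync.go: demonstrate why a shared informer periodically "forces
// resync": cached objects are re-sent to UpdateFunc on the period.
package main

import (
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/cache"
)

func main() {
	cfg, err := rest.InClusterConfig() // assumption: running inside a pod
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	factory := informers.NewSharedInformerFactory(cs, 12*time.Hour)
	podInformer := factory.Core().V1().Pods().Informer()
	podInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		// On every resync, cached pods arrive here with old == new.
		UpdateFunc: func(oldObj, newObj interface{}) {
			fmt.Println("resync/update:", newObj.(*v1.Pod).Name)
		},
	})

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)
	factory.WaitForCacheSync(stop)
	select {} // block; resyncs fire on the configured period
}
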
I0920 04:26:58.868429  108327 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0920 04:26:58.868461  108327 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:26:58.868474  108327 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0920 04:26:58.868480  108327 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0920 04:26:58.868488  108327 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0920 04:26:58.868521  108327 httplog.go:90] GET /healthz: (261.674µs) 0 [Go-http-client/1.1 127.0.0.1:41740]
[... 15 further identical healthz polls elided (04:26:58.878 through 04:26:59.578): etcd client connection not yet established; poststarthooks rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes and ca-registration not finished ...]
I0920 04:26:59.607065  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:26:59.607300  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
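
The parsed scheme: "endpoint" and ccResolverWrapper lines above are the etcd v3 client registering its gRPC resolver for the ServerList from the storage backend config, after which the etcd healthz check flips to [+]. A minimal direct-dial sketch follows; the import path assumes etcd client v3.5+, and the apiserver itself reaches this code through its storage factory rather than dialing by hand.

// etcddial.go: dial the same endpoint the log's ServerList names and
// count the keys under the root prefix.
package main

import (
	"context"
	"fmt"
	"time"

	clientv3 "go.etcd.io/etcd/client/v3"
)

func main() {
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"http://127.0.0.1:2379"}, // ServerList from the log
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()
	resp, err := cli.Get(ctx, "/", clientv3.WithPrefix(), clientv3.WithCountOnly())
	if err != nil {
		panic(err)
	}
	fmt.Println("keys stored:", resp.Count)
}
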
I0920 04:26:59.669589  108327 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:26:59.669658  108327 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0920 04:26:59.669668  108327 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0920 04:26:59.669677  108327 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0920 04:26:59.669764  108327 httplog.go:90] GET /healthz: (1.417312ms) 0 [Go-http-client/1.1 127.0.0.1:41740]
I0920 04:26:59.679227  108327 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:26:59.679311  108327 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0920 04:26:59.679321  108327 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0920 04:26:59.679330  108327 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0920 04:26:59.679437  108327 httplog.go:90] GET /healthz: (1.022924ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41740]
I0920 04:26:59.768785  108327 httplog.go:90] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-node-critical: (1.517102ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41740]
I0920 04:26:59.768926  108327 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.656351ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41736]
I0920 04:26:59.769557  108327 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:26:59.769578  108327 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0920 04:26:59.769587  108327 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0920 04:26:59.769597  108327 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0920 04:26:59.769627  108327 httplog.go:90] GET /healthz: (923.267µs) 0 [Go-http-client/1.1 127.0.0.1:41764]
I0920 04:26:59.769877  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles: (842.933µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41740]
I0920 04:26:59.770868  108327 httplog.go:90] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (1.538382ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41736]
I0920 04:26:59.770914  108327 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: (1.531882ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0920 04:26:59.771562  108327 storage_scheduling.go:139] created PriorityClass system-node-critical with value 2000001000
I0920 04:26:59.772194  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (869.825µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41764]
I0920 04:26:59.773420  108327 httplog.go:90] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-cluster-critical: (1.176395ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41768]
I0920 04:26:59.773586  108327 httplog.go:90] POST /api/v1/namespaces/kube-system/configmaps: (1.930464ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0920 04:26:59.774618  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (1.299864ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41764]
I0920 04:26:59.775380  108327 httplog.go:90] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (1.63195ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41768]
I0920 04:26:59.775903  108327 storage_scheduling.go:139] created PriorityClass system-cluster-critical with value 2000000000
I0920 04:26:59.775927  108327 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
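
system-node-critical (2000001000) and system-cluster-critical (2000000000) above are created by the scheduling/bootstrap-system-priority-classes poststarthook, which is why that check flips to [+] in the next healthz report. A get-or-create sketch of the same step follows; it uses the scheduling.k8s.io/v1 client for brevity, while this apiserver registered the classes through the v1beta1 endpoint.

// prioclasses.go: ensure the two system PriorityClasses exist with the
// values shown in the log.
package main

import (
	"context"
	"log"

	schedulingv1 "k8s.io/api/scheduling/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	for name, value := range map[string]int32{
		"system-node-critical":    2000001000,
		"system-cluster-critical": 2000000000,
	} {
		_, err := cs.SchedulingV1().PriorityClasses().Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			continue // already exists
		}
		if !apierrors.IsNotFound(err) {
			log.Fatal(err)
		}
		_, err = cs.SchedulingV1().PriorityClasses().Create(context.TODO(), &schedulingv1.PriorityClass{
			ObjectMeta: metav1.ObjectMeta{Name: name},
			Value:      value,
		}, metav1.CreateOptions{})
		if err != nil {
			log.Fatal(err)
		}
		log.Printf("created PriorityClass %s with value %d", name, value)
	}
}
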
I0920 04:26:59.776223  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (867.872µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41764]
I0920 04:26:59.777473  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (811.729µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41768]
I0920 04:26:59.778547  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (724.148µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41768]
I0920 04:26:59.779317  108327 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:26:59.779335  108327 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:26:59.779356  108327 httplog.go:90] GET /healthz: (903.146µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0920 04:26:59.779683  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (707.535µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41768]
I0920 04:26:59.780906  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (924.294µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41768]
I0920 04:26:59.782036  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (854.687µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41768]
I0920 04:26:59.783046  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-admin: (682.466µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41768]
I0920 04:26:59.784976  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.460741ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41768]
I0920 04:26:59.785351  108327 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/cluster-admin
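
Each GET-404 followed by POST-201 in this stretch is the rbac/bootstrap-roles poststarthook reconciling one default ClusterRole, and healthz keeps reporting [-] for that hook until the whole list is done. Below is a reduced sketch of the create path only (the real reconciler also patches drifted rules on existing roles), with an illustrative rule that approximates system:discovery.

// bootstraproles.go: the get-or-create half of the RBAC bootstrap
// pattern visible above.
package main

import (
	"context"
	"log"

	rbacv1 "k8s.io/api/rbac/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func ensureClusterRole(ctx context.Context, cs kubernetes.Interface, role *rbacv1.ClusterRole) error {
	_, err := cs.RbacV1().ClusterRoles().Get(ctx, role.Name, metav1.GetOptions{})
	if err == nil || !apierrors.IsNotFound(err) {
		return err
	}
	_, err = cs.RbacV1().ClusterRoles().Create(ctx, role, metav1.CreateOptions{})
	if err == nil {
		log.Printf("created clusterrole.rbac.authorization.k8s.io/%s", role.Name)
	}
	return err
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	err = ensureClusterRole(context.Background(), cs, &rbacv1.ClusterRole{
		ObjectMeta: metav1.ObjectMeta{Name: "system:discovery"},
		Rules: []rbacv1.PolicyRule{{
			Verbs:           []string{"get"},
			NonResourceURLs: []string{"/healthz", "/version", "/api", "/apis"},
		}},
	})
	if err != nil {
		log.Fatal(err)
	}
}
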
I0920 04:26:59.786344  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:discovery: (670.508µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41768]
I0920 04:26:59.788083  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.396791ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41768]
I0920 04:26:59.788286  108327 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:discovery
I0920 04:26:59.789237  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:basic-user: (723.935µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41768]
I0920 04:26:59.792369  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.742799ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41768]
I0920 04:26:59.792564  108327 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:basic-user
I0920 04:26:59.793979  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:public-info-viewer: (1.031165ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41768]
I0920 04:26:59.796113  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.659929ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41768]
I0920 04:26:59.796376  108327 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:public-info-viewer
I0920 04:26:59.798163  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (1.582291ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41768]
I0920 04:26:59.800686  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.125519ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41768]
I0920 04:26:59.800975  108327 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/admin
I0920 04:26:59.804054  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (2.948796ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41768]
I0920 04:26:59.806338  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.542249ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41768]
I0920 04:26:59.806545  108327 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/edit
I0920 04:26:59.808493  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (1.7634ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41768]
I0920 04:26:59.810461  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.549612ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41768]
I0920 04:26:59.810685  108327 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/view
I0920 04:26:59.811703  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (809.608µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41768]
I0920 04:26:59.817527  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.893305ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41768]
I0920 04:26:59.817782  108327 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin
I0920 04:26:59.819884  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (1.779753ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41768]
I0920 04:26:59.824972  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (4.144507ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41768]
I0920 04:26:59.825270  108327 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit
I0920 04:26:59.826535  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (954.97µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41768]
I0920 04:26:59.846540  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (9.928358ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41768]
I0920 04:26:59.846883  108327 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view
I0920 04:26:59.848630  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:26:59.848748  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:heapster: (1.506038ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41768]
I0920 04:26:59.848878  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:26:59.848979  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:26:59.851266  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.958718ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41768]
I0920 04:26:59.851536  108327 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:heapster
I0920 04:26:59.851721  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:26:59.851766  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:26:59.852811  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node: (941.055µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41768]
I0920 04:26:59.853170  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:26:59.853735  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:26:59.856703  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.209242ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41768]
I0920 04:26:59.857019  108327 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node
I0920 04:26:59.858535  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-problem-detector: (1.141469ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41768]
I0920 04:26:59.860897  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.941631ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41768]
I0920 04:26:59.861194  108327 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector
I0920 04:26:59.862332  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kubelet-api-admin: (797.534µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41768]
I0920 04:26:59.864361  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.52859ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41768]
I0920 04:26:59.864635  108327 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kubelet-api-admin
I0920 04:26:59.866047  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-bootstrapper: (1.116368ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41768]
I0920 04:26:59.867894  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.444207ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41768]
I0920 04:26:59.868083  108327 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-bootstrapper
I0920 04:26:59.868823  108327 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:26:59.868849  108327 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:26:59.868892  108327 httplog.go:90] GET /healthz: (788.639µs) 0 [Go-http-client/1.1 127.0.0.1:41766]
I0920 04:26:59.869110  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:auth-delegator: (806.836µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41768]
I0920 04:26:59.871454  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.89436ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0920 04:26:59.871774  108327 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:auth-delegator
I0920 04:26:59.873074  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-aggregator: (1.064552ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0920 04:26:59.875345  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.742118ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0920 04:26:59.875571  108327 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator
I0920 04:26:59.877667  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-controller-manager: (1.917639ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0920 04:26:59.880252  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.064378ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0920 04:26:59.880534  108327 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:26:59.880550  108327 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager
I0920 04:26:59.880554  108327 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:26:59.880590  108327 httplog.go:90] GET /healthz: (2.279299ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41768]
I0920 04:26:59.881833  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-dns: (1.106184ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0920 04:26:59.886204  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.525055ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0920 04:26:59.886489  108327 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-dns
I0920 04:26:59.888590  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:persistent-volume-provisioner: (1.71046ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0920 04:26:59.890914  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.862266ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0920 04:26:59.891184  108327 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner
I0920 04:26:59.892328  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-attacher: (959.37µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0920 04:26:59.894351  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.459912ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0920 04:26:59.894748  108327 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:csi-external-attacher
I0920 04:26:59.895884  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:nodeclient: (846.044µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0920 04:26:59.899662  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.441296ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0920 04:26:59.899933  108327 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:nodeclient
I0920 04:26:59.901046  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient: (878.49µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0920 04:26:59.903943  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.326651ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0920 04:26:59.904142  108327 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
I0920 04:26:59.905217  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:volume-scheduler: (848.494µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0920 04:26:59.908871  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.802028ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0920 04:26:59.914674  108327 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:volume-scheduler
I0920 04:26:59.916106  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-proxier: (1.119481ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0920 04:26:59.918463  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.706312ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0920 04:26:59.918723  108327 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-proxier
I0920 04:26:59.919910  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-scheduler: (870.824µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0920 04:26:59.921916  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.557942ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0920 04:26:59.922311  108327 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler
I0920 04:26:59.923618  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-provisioner: (1.077942ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0920 04:26:59.926321  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.225637ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0920 04:26:59.926598  108327 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:csi-external-provisioner
I0920 04:26:59.928169  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:attachdetach-controller: (1.316106ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0920 04:26:59.931430  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.5218ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0920 04:26:59.931869  108327 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0920 04:26:59.933094  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:clusterrole-aggregation-controller: (968.742µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0920 04:26:59.934989  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.337846ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0920 04:26:59.935179  108327 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0920 04:26:59.936705  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:cronjob-controller: (1.348144ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0920 04:26:59.943056  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (5.933505ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0920 04:26:59.943900  108327 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0920 04:26:59.945321  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:daemon-set-controller: (1.020003ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0920 04:26:59.947983  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.930305ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0920 04:26:59.948402  108327 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0920 04:26:59.949437  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:deployment-controller: (745.047µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0920 04:26:59.951220  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.38602ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0920 04:26:59.951556  108327 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller
I0920 04:26:59.952775  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:disruption-controller: (1.010143ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0920 04:26:59.955343  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.867237ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0920 04:26:59.955615  108327 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller
I0920 04:26:59.956959  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:endpoint-controller: (1.067244ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0920 04:26:59.958932  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.458624ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0920 04:26:59.959155  108327 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0920 04:26:59.960180  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:expand-controller: (799.812µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0920 04:26:59.962357  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.681217ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0920 04:26:59.962741  108327 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:expand-controller
I0920 04:26:59.965321  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:generic-garbage-collector: (2.405674ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0920 04:26:59.967737  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.838568ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0920 04:26:59.967991  108327 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0920 04:26:59.969207  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:horizontal-pod-autoscaler: (988.622µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0920 04:26:59.969251  108327 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:26:59.969272  108327 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:26:59.969309  108327 httplog.go:90] GET /healthz: (1.070383ms) 0 [Go-http-client/1.1 127.0.0.1:41768]
I0920 04:26:59.971558  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.767848ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41768]
I0920 04:26:59.971801  108327 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0920 04:26:59.972946  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:job-controller: (947.811µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41768]
I0920 04:26:59.975371  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.029463ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41768]
I0920 04:26:59.975700  108327 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller
I0920 04:26:59.976915  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:namespace-controller: (957.169µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41768]
I0920 04:26:59.979147  108327 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:26:59.979221  108327 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:26:59.979318  108327 httplog.go:90] GET /healthz: (1.071087ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0920 04:26:59.979758  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.361511ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41768]
I0920 04:26:59.980051  108327 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:namespace-controller
I0920 04:26:59.981083  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:node-controller: (862.191µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41768]
I0920 04:26:59.982937  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.490122ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41768]
I0920 04:26:59.983174  108327 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:node-controller
I0920 04:26:59.984264  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:persistent-volume-binder: (921.358µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41768]
I0920 04:26:59.986336  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.386667ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41768]
I0920 04:26:59.986555  108327 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0920 04:26:59.988028  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pod-garbage-collector: (1.180358ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41768]
I0920 04:26:59.991229  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.66225ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41768]
I0920 04:26:59.991544  108327 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0920 04:26:59.992897  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replicaset-controller: (1.054001ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41768]
I0920 04:26:59.994998  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.613071ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41768]
I0920 04:26:59.995291  108327 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0920 04:26:59.997182  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replication-controller: (1.688902ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41768]
I0920 04:26:59.999439  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.349063ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41768]
I0920 04:26:59.999616  108327 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller
I0920 04:27:00.000643  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:resourcequota-controller: (832.115µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41768]
I0920 04:27:00.002804  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.582821ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41768]
I0920 04:27:00.002986  108327 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0920 04:27:00.004528  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:route-controller: (1.303787ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41768]
I0920 04:27:00.007441  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.48737ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41768]
I0920 04:27:00.007672  108327 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:route-controller
I0920 04:27:00.009196  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-account-controller: (1.240593ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41768]
I0920 04:27:00.010928  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.31079ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41768]
I0920 04:27:00.011170  108327 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-account-controller
I0920 04:27:00.016942  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-controller: (5.56302ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41768]
I0920 04:27:00.019869  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.932103ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41768]
I0920 04:27:00.020186  108327 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-controller
I0920 04:27:00.021690  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:statefulset-controller: (1.193542ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41768]
I0920 04:27:00.026700  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (4.424712ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41768]
I0920 04:27:00.026937  108327 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0920 04:27:00.028560  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:ttl-controller: (1.018769ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41768]
I0920 04:27:00.030923  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.78891ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41768]
I0920 04:27:00.031195  108327 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-controller
I0920 04:27:00.032612  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:certificate-controller: (1.127741ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41768]
I0920 04:27:00.034625  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.602433ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41768]
I0920 04:27:00.035055  108327 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller
I0920 04:27:00.036482  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pvc-protection-controller: (1.042203ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41768]
I0920 04:27:00.038943  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.829955ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41768]
I0920 04:27:00.039141  108327 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0920 04:27:00.049110  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pv-protection-controller: (1.621292ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41768]
I0920 04:27:00.070206  108327 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:27:00.070248  108327 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:27:00.070294  108327 httplog.go:90] GET /healthz: (2.063714ms) 0 [Go-http-client/1.1 127.0.0.1:41766]
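[editor] The /healthz report above aggregates named checks and keeps returning a non-200 status while the rbac/bootstrap-roles post-start hook is still running; failure details are hidden from the caller ("reason withheld"). A minimal sketch of that aggregation pattern in Go follows — illustrative only, not the actual k8s.io/apiserver healthz implementation; all names below are made up.

    package main

    import (
        "fmt"
        "net/http"
    )

    // namedCheck is one health check; a nil error renders as "[+]name ok".
    type namedCheck struct {
        name string
        run  func() error
    }

    // healthzHandler renders the [+]/[-] report seen in the log and answers
    // HTTP 500 with "healthz check failed" while any check still fails.
    func healthzHandler(checks []namedCheck) http.HandlerFunc {
        return func(w http.ResponseWriter, r *http.Request) {
            failed := false
            report := ""
            for _, c := range checks {
                if err := c.run(); err != nil {
                    failed = true
                    // Details are withheld from callers, as in the log above.
                    report += "[-]" + c.name + " failed: reason withheld\n"
                } else {
                    report += "[+]" + c.name + " ok\n"
                }
            }
            if failed {
                w.WriteHeader(http.StatusInternalServerError)
                fmt.Fprint(w, report+"healthz check failed\n")
                return
            }
            fmt.Fprint(w, "ok\n")
        }
    }

    func main() {
        checks := []namedCheck{
            {"ping", func() error { return nil }},
            // Stand-in for poststarthook/rbac/bootstrap-roles before it finishes.
            {"poststarthook/rbac/bootstrap-roles", func() error { return fmt.Errorf("not finished") }},
        }
        http.HandleFunc("/healthz", healthzHandler(checks))
        http.ListenAndServe("127.0.0.1:8080", nil)
    }

Once the hook completes, every check flips to [+] and /healthz starts returning 200, which is exactly the transition visible near the end of this section.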
I0920 04:27:00.070813  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.25075ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41768]
I0920 04:27:00.071069  108327 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0920 04:27:00.079455  108327 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:27:00.079488  108327 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:27:00.079524  108327 httplog.go:90] GET /healthz: (990.184µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41768]
I0920 04:27:00.088776  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin: (1.281673ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41768]
I0920 04:27:00.111045  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.512446ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41768]
I0920 04:27:00.111648  108327 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin
I0920 04:27:00.128601  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (1.098701ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41768]
I0920 04:27:00.149662  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.049167ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41768]
I0920 04:27:00.150044  108327 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery
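[editor] Each GET ... 404 followed by POST ... 201 pair above is the RBAC bootstrapper's get-then-create reconciliation: look the object up, create it only if it is genuinely absent. A hedged client-go sketch of that idiom, using the context-less method signatures of this log's era (newer client-go releases take a context.Context first argument); the fake clientset keeps it runnable without a cluster:

    package main

    import (
        "fmt"

        rbacv1 "k8s.io/api/rbac/v1"
        "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/kubernetes/fake"
    )

    // ensureClusterRoleBinding mirrors the GET-404/POST-201 pattern in the log.
    func ensureClusterRoleBinding(cs kubernetes.Interface, want *rbacv1.ClusterRoleBinding) error {
        _, err := cs.RbacV1().ClusterRoleBindings().Get(want.Name, metav1.GetOptions{})
        if err == nil {
            return nil // already bootstrapped; nothing to do
        }
        if !errors.IsNotFound(err) {
            return err // a transport or authz failure, not a missing object
        }
        _, err = cs.RbacV1().ClusterRoleBindings().Create(want) // 201 Created on success
        return err
    }

    func main() {
        cs := fake.NewSimpleClientset()
        crb := &rbacv1.ClusterRoleBinding{ObjectMeta: metav1.ObjectMeta{Name: "system:discovery"}}
        if err := ensureClusterRoleBinding(cs, crb); err != nil {
            panic(err)
        }
        fmt.Println("ensured clusterrolebinding", crb.Name)
    }

The same idiom recurs throughout the rest of this section for clusterroles, roles, rolebindings, and later for the default namespace, the kubernetes Service, and its Endpoints.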
I0920 04:27:00.169955  108327 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:27:00.169991  108327 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:27:00.170035  108327 httplog.go:90] GET /healthz: (1.839038ms) 0 [Go-http-client/1.1 127.0.0.1:41766]
I0920 04:27:00.174718  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:basic-user: (6.621043ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41768]
I0920 04:27:00.180144  108327 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:27:00.180291  108327 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:27:00.180542  108327 httplog.go:90] GET /healthz: (1.940099ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41768]
I0920 04:27:00.190431  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.805576ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41768]
I0920 04:27:00.190651  108327 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user
I0920 04:27:00.209241  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:public-info-viewer: (1.745772ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41768]
I0920 04:27:00.230019  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.450704ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41768]
I0920 04:27:00.230307  108327 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:public-info-viewer
I0920 04:27:00.249102  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node-proxier: (1.530224ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41768]
I0920 04:27:00.269253  108327 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:27:00.269513  108327 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:27:00.269556  108327 httplog.go:90] GET /healthz: (1.311036ms) 0 [Go-http-client/1.1 127.0.0.1:41766]
I0920 04:27:00.269992  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.186784ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41768]
I0920 04:27:00.270321  108327 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier
I0920 04:27:00.279222  108327 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:27:00.279252  108327 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:27:00.279312  108327 httplog.go:90] GET /healthz: (898.305µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41768]
I0920 04:27:00.288954  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-controller-manager: (1.349239ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41768]
I0920 04:27:00.311356  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.365308ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41768]
I0920 04:27:00.311771  108327 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager
I0920 04:27:00.328581  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-dns: (1.082253ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41768]
I0920 04:27:00.349963  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.230512ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41768]
I0920 04:27:00.350254  108327 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns
I0920 04:27:00.369272  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-scheduler: (1.679609ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41768]
I0920 04:27:00.369783  108327 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:27:00.369981  108327 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:27:00.370223  108327 httplog.go:90] GET /healthz: (2.00891ms) 0 [Go-http-client/1.1 127.0.0.1:41766]
I0920 04:27:00.379606  108327 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:27:00.379649  108327 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:27:00.379707  108327 httplog.go:90] GET /healthz: (1.245317ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0920 04:27:00.390262  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.754553ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0920 04:27:00.390757  108327 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler
I0920 04:27:00.410823  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:volume-scheduler: (3.312616ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0920 04:27:00.430267  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.748014ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0920 04:27:00.430726  108327 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:volume-scheduler
I0920 04:27:00.448897  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node: (1.369856ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0920 04:27:00.469729  108327 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:27:00.469776  108327 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:27:00.469823  108327 httplog.go:90] GET /healthz: (1.566619ms) 0 [Go-http-client/1.1 127.0.0.1:41768]
I0920 04:27:00.470441  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.70403ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0920 04:27:00.470649  108327 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node
I0920 04:27:00.479314  108327 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:27:00.479347  108327 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:27:00.479420  108327 httplog.go:90] GET /healthz: (971.836µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0920 04:27:00.488718  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:attachdetach-controller: (1.317634ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0920 04:27:00.509899  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.308785ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0920 04:27:00.510402  108327 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0920 04:27:00.529081  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:clusterrole-aggregation-controller: (1.512059ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0920 04:27:00.549609  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.076878ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0920 04:27:00.549981  108327 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0920 04:27:00.568915  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:cronjob-controller: (1.421028ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0920 04:27:00.569116  108327 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:27:00.569140  108327 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:27:00.569176  108327 httplog.go:90] GET /healthz: (970.09µs) 0 [Go-http-client/1.1 127.0.0.1:41768]
I0920 04:27:00.579409  108327 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:27:00.579445  108327 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:27:00.579485  108327 httplog.go:90] GET /healthz: (1.014191ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41768]
I0920 04:27:00.590065  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.418282ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41768]
I0920 04:27:00.590289  108327 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0920 04:27:00.608918  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:daemon-set-controller: (1.369231ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41768]
I0920 04:27:00.629896  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.309703ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41768]
I0920 04:27:00.630126  108327 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0920 04:27:00.648958  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:deployment-controller: (1.377632ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41768]
I0920 04:27:00.669293  108327 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:27:00.669341  108327 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:27:00.669380  108327 httplog.go:90] GET /healthz: (1.20798ms) 0 [Go-http-client/1.1 127.0.0.1:41766]
I0920 04:27:00.670373  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.830199ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41768]
I0920 04:27:00.670617  108327 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I0920 04:27:00.679348  108327 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:27:00.679378  108327 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:27:00.679430  108327 httplog.go:90] GET /healthz: (1.032319ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41768]
I0920 04:27:00.688871  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:disruption-controller: (1.345964ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41768]
I0920 04:27:00.709904  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.282775ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41768]
I0920 04:27:00.710487  108327 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I0920 04:27:00.728949  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:endpoint-controller: (1.274599ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41768]
I0920 04:27:00.749799  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.296715ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41768]
I0920 04:27:00.750284  108327 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0920 04:27:00.769170  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:expand-controller: (1.468469ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41768]
I0920 04:27:00.769295  108327 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:27:00.769326  108327 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:27:00.769532  108327 httplog.go:90] GET /healthz: (1.233087ms) 0 [Go-http-client/1.1 127.0.0.1:41766]
I0920 04:27:00.779572  108327 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:27:00.779603  108327 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:27:00.779650  108327 httplog.go:90] GET /healthz: (1.276067ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0920 04:27:00.789934  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.43699ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0920 04:27:00.790186  108327 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
I0920 04:27:00.809034  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:generic-garbage-collector: (1.423148ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0920 04:27:00.830022  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.363762ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0920 04:27:00.830567  108327 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0920 04:27:00.848771  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:00.848822  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:horizontal-pod-autoscaler: (1.347564ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0920 04:27:00.849055  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:00.849119  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:00.851894  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:00.852003  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:00.853336  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:00.853907  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:00.869778  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.219688ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0920 04:27:00.869788  108327 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:27:00.870085  108327 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:27:00.870297  108327 httplog.go:90] GET /healthz: (1.838421ms) 0 [Go-http-client/1.1 127.0.0.1:41768]
I0920 04:27:00.870537  108327 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0920 04:27:00.879328  108327 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:27:00.879511  108327 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:27:00.879730  108327 httplog.go:90] GET /healthz: (1.389251ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41768]
I0920 04:27:00.888921  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:job-controller: (1.387205ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41768]
I0920 04:27:00.910701  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.93878ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41768]
I0920 04:27:00.911019  108327 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I0920 04:27:00.929537  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:namespace-controller: (1.394198ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41768]
I0920 04:27:00.949902  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.308651ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41768]
I0920 04:27:00.950297  108327 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I0920 04:27:00.969556  108327 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:27:00.969734  108327 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:27:00.969993  108327 httplog.go:90] GET /healthz: (1.591693ms) 0 [Go-http-client/1.1 127.0.0.1:41766]
I0920 04:27:00.970449  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:node-controller: (1.990729ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41768]
I0920 04:27:00.979365  108327 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:27:00.979418  108327 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:27:00.979460  108327 httplog.go:90] GET /healthz: (1.04369ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0920 04:27:00.989943  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.416424ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0920 04:27:00.990226  108327 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I0920 04:27:01.009639  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:persistent-volume-binder: (1.9179ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0920 04:27:01.030382  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.882216ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0920 04:27:01.030811  108327 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0920 04:27:01.051327  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pod-garbage-collector: (3.654325ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0920 04:27:01.069453  108327 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:27:01.069489  108327 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:27:01.069525  108327 httplog.go:90] GET /healthz: (1.294385ms) 0 [Go-http-client/1.1 127.0.0.1:41768]
I0920 04:27:01.070692  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.218364ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0920 04:27:01.071175  108327 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0920 04:27:01.079577  108327 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:27:01.079604  108327 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:27:01.079637  108327 httplog.go:90] GET /healthz: (1.34508ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0920 04:27:01.088808  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replicaset-controller: (1.214613ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0920 04:27:01.109938  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.392941ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0920 04:27:01.110278  108327 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0920 04:27:01.128937  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replication-controller: (1.432737ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0920 04:27:01.149208  108327 node_lifecycle_controller.go:718] Controller observed a Node deletion: node-1
I0920 04:27:01.149234  108327 controller_utils.go:168] Recording Removing Node node-1 from Controller event message for node node-1
I0920 04:27:01.149358  108327 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-1", UID:"9505eeec-85bc-4cde-add2-4b850a0b0efb", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RemovingNode' Node node-1 event: Removing Node node-1 from Controller
I0920 04:27:01.152975  108327 httplog.go:90] POST /api/v1/namespaces/default/events: (3.330261ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52112]
I0920 04:27:01.152993  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (5.181664ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0920 04:27:01.153347  108327 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
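[editor] The node_lifecycle_controller lines at 04:27:01.149 show the controller observing the node-1 deletion and emitting a Normal 'RemovingNode' event, which then lands as the POST /api/v1/namespaces/default/events immediately after (the RBAC bootstrap traffic is simply interleaved on another connection). A hedged sketch of that event-recording path with client-go's record package — the broadcaster below only logs locally instead of POSTing to the API server, and the component name is illustrative:

    package main

    import (
        "fmt"
        "time"

        v1 "k8s.io/api/core/v1"
        "k8s.io/client-go/kubernetes/scheme"
        "k8s.io/client-go/tools/record"
    )

    func main() {
        // Sink events to stdout rather than to /api/v1/namespaces/*/events.
        broadcaster := record.NewBroadcaster()
        broadcaster.StartLogging(func(format string, args ...interface{}) {
            fmt.Printf(format+"\n", args...)
        })
        recorder := broadcaster.NewRecorder(scheme.Scheme, v1.EventSource{Component: "node-lifecycle-controller"})

        // The same kind of ObjectReference the log prints for node-1.
        ref := &v1.ObjectReference{Kind: "Node", Name: "node-1", UID: "9505eeec-85bc-4cde-add2-4b850a0b0efb"}
        recorder.Eventf(ref, v1.EventTypeNormal, "RemovingNode",
            "Node %v event: Removing Node %v from Controller", "node-1", "node-1")

        // Delivery is asynchronous; give the watcher goroutine time to print.
        time.Sleep(100 * time.Millisecond)
    }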
I0920 04:27:01.169118  108327 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:27:01.169166  108327 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:27:01.169210  108327 httplog.go:90] GET /healthz: (871.041µs) 0 [Go-http-client/1.1 127.0.0.1:41768]
I0920 04:27:01.170042  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:resourcequota-controller: (2.524171ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0920 04:27:01.179297  108327 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:27:01.179328  108327 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:27:01.179378  108327 httplog.go:90] GET /healthz: (901.666µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0920 04:27:01.189419  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.899461ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0920 04:27:01.189855  108327 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0920 04:27:01.208943  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:route-controller: (1.385689ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0920 04:27:01.230295  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.776935ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0920 04:27:01.230633  108327 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I0920 04:27:01.249368  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-account-controller: (1.688431ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0920 04:27:01.269368  108327 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:27:01.269647  108327 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:27:01.269905  108327 httplog.go:90] GET /healthz: (1.677792ms) 0 [Go-http-client/1.1 127.0.0.1:41768]
I0920 04:27:01.270312  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.766796ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0920 04:27:01.270536  108327 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I0920 04:27:01.279947  108327 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:27:01.279981  108327 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:27:01.280024  108327 httplog.go:90] GET /healthz: (1.584342ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0920 04:27:01.288781  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-controller: (1.26918ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0920 04:27:01.309236  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.802707ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0920 04:27:01.309572  108327 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I0920 04:27:01.329200  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:statefulset-controller: (1.558608ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0920 04:27:01.350243  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.61962ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0920 04:27:01.350660  108327 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0920 04:27:01.368977  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:ttl-controller: (1.3775ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0920 04:27:01.369160  108327 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:27:01.369188  108327 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:27:01.369234  108327 httplog.go:90] GET /healthz: (1.030954ms) 0 [Go-http-client/1.1 127.0.0.1:41768]
I0920 04:27:01.379895  108327 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:27:01.379934  108327 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:27:01.379991  108327 httplog.go:90] GET /healthz: (1.466209ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41768]
I0920 04:27:01.389857  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.218819ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41768]
I0920 04:27:01.390125  108327 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I0920 04:27:01.408989  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:certificate-controller: (1.421889ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41768]
I0920 04:27:01.430099  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.5598ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41768]
I0920 04:27:01.430423  108327 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I0920 04:27:01.448828  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pvc-protection-controller: (1.30149ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41768]
I0920 04:27:01.469698  108327 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:27:01.469738  108327 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:27:01.469772  108327 httplog.go:90] GET /healthz: (985.275µs) 0 [Go-http-client/1.1 127.0.0.1:41766]
I0920 04:27:01.470096  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.528079ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41768]
I0920 04:27:01.470597  108327 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0920 04:27:01.479753  108327 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:27:01.479782  108327 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:27:01.479817  108327 httplog.go:90] GET /healthz: (1.228255ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41768]
I0920 04:27:01.489007  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pv-protection-controller: (1.4855ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41768]
I0920 04:27:01.509820  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.166545ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41768]
I0920 04:27:01.510091  108327 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0920 04:27:01.529349  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/extension-apiserver-authentication-reader: (1.607024ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41768]
I0920 04:27:01.531974  108327 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.734223ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41768]
I0920 04:27:01.549900  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.030019ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41768]
I0920 04:27:01.550141  108327 storage_rbac.go:278] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
I0920 04:27:01.569073  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:bootstrap-signer: (1.534007ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41768]
I0920 04:27:01.569117  108327 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:27:01.569138  108327 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:27:01.569174  108327 httplog.go:90] GET /healthz: (1.01301ms) 0 [Go-http-client/1.1 127.0.0.1:41766]
I0920 04:27:01.570946  108327 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.288712ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0920 04:27:01.579690  108327 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:27:01.579729  108327 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:27:01.579898  108327 httplog.go:90] GET /healthz: (1.460838ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0920 04:27:01.590105  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.519245ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0920 04:27:01.590669  108327 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0920 04:27:01.609799  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:cloud-provider: (1.689693ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0920 04:27:01.614909  108327 httplog.go:90] GET /api/v1/namespaces/kube-system: (4.119861ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0920 04:27:01.629910  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.306334ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0920 04:27:01.630316  108327 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0920 04:27:01.649559  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:token-cleaner: (1.500243ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0920 04:27:01.651884  108327 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.677515ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0920 04:27:01.669277  108327 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:27:01.669307  108327 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:27:01.669348  108327 httplog.go:90] GET /healthz: (1.131126ms) 0 [Go-http-client/1.1 127.0.0.1:41768]
I0920 04:27:01.670141  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.565685ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0920 04:27:01.670451  108327 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0920 04:27:01.679609  108327 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:27:01.679766  108327 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:27:01.679904  108327 httplog.go:90] GET /healthz: (1.416389ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0920 04:27:01.689033  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-controller-manager: (1.51285ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0920 04:27:01.691800  108327 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.907615ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0920 04:27:01.710180  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.434378ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0920 04:27:01.710655  108327 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0920 04:27:01.729576  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-scheduler: (1.882907ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0920 04:27:01.734633  108327 httplog.go:90] GET /api/v1/namespaces/kube-system: (3.970008ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0920 04:27:01.750122  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.43788ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0920 04:27:01.750456  108327 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0920 04:27:01.769802  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles/system:controller:bootstrap-signer: (1.463437ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0920 04:27:01.770135  108327 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:27:01.770172  108327 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:27:01.770222  108327 httplog.go:90] GET /healthz: (1.909566ms) 0 [Go-http-client/1.1 127.0.0.1:41768]
I0920 04:27:01.772681  108327 httplog.go:90] GET /api/v1/namespaces/kube-public: (2.04972ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0920 04:27:01.779791  108327 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:27:01.779841  108327 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:27:01.779937  108327 httplog.go:90] GET /healthz: (1.390562ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0920 04:27:01.790018  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles: (2.427308ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0920 04:27:01.790318  108327 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0920 04:27:01.809984  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::extension-apiserver-authentication-reader: (1.821937ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0920 04:27:01.812096  108327 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.614118ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0920 04:27:01.829981  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.43205ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0920 04:27:01.830475  108327 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::extension-apiserver-authentication-reader in kube-system
I0920 04:27:01.849067  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-controller-manager: (1.440898ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0920 04:27:01.849149  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:01.849315  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:01.849684  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:01.850991  108327 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.421496ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0920 04:27:01.852056  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:01.852212  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:01.853478  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:01.854115  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
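[editor] The bursts of reflector.go "forcing resync" lines (here and at 04:27:00.848) come from shared informers built with a short, non-zero resync period: on each tick the informer replays its cached objects to registered handlers without re-listing from the API server. A minimal, hypothetical factory setup that would produce the same per-second cadence — the kubeconfig path and 1s period are assumptions for illustration; real controllers use minutes to hours:

    package main

    import (
        "fmt"
        "time"

        "k8s.io/client-go/informers"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumes a reachable cluster via the default kubeconfig.
        config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(config)

        // A 1s resync period: each reflector logs "forcing resync" every second.
        factory := informers.NewSharedInformerFactory(cs, 1*time.Second)
        podInformer := factory.Core().V1().Pods().Informer()

        stop := make(chan struct{})
        factory.Start(stop)
        factory.WaitForCacheSync(stop)
        fmt.Println("cache synced:", podInformer.HasSynced())

        time.Sleep(5 * time.Second) // watch the resyncs fire
        close(stop)
    }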
I0920 04:27:01.869990  108327 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:27:01.870027  108327 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:27:01.870087  108327 httplog.go:90] GET /healthz: (1.852618ms) 0 [Go-http-client/1.1 127.0.0.1:41768]
I0920 04:27:01.870643  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.950031ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0920 04:27:01.870874  108327 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0920 04:27:01.879580  108327 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:27:01.879728  108327 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:27:01.879963  108327 httplog.go:90] GET /healthz: (1.462312ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0920 04:27:01.889209  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-scheduler: (1.620853ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0920 04:27:01.892153  108327 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.179309ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0920 04:27:01.909947  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.319888ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0920 04:27:01.910680  108327 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0920 04:27:01.929261  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:bootstrap-signer: (1.67384ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0920 04:27:01.931215  108327 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.346962ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0920 04:27:01.949936  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.266554ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0920 04:27:01.950216  108327 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0920 04:27:01.969364  108327 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:27:01.969422  108327 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:27:01.969466  108327 httplog.go:90] GET /healthz: (1.229962ms) 0 [Go-http-client/1.1 127.0.0.1:41768]
I0920 04:27:01.970017  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:cloud-provider: (2.337865ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0920 04:27:01.972076  108327 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.457238ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0920 04:27:01.979488  108327 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:27:01.979546  108327 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:27:01.979594  108327 httplog.go:90] GET /healthz: (1.234363ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0920 04:27:01.989886  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.332995ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0920 04:27:01.990362  108327 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0920 04:27:02.008761  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:token-cleaner: (1.167429ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0920 04:27:02.010488  108327 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.218541ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0920 04:27:02.030202  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.662316ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0920 04:27:02.030627  108327 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0920 04:27:02.048797  108327 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings/system:controller:bootstrap-signer: (1.109191ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0920 04:27:02.050774  108327 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.425407ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0920 04:27:02.069953  108327 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:27:02.069991  108327 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:27:02.070033  108327 httplog.go:90] GET /healthz: (1.792625ms) 0 [Go-http-client/1.1 127.0.0.1:41768]
I0920 04:27:02.070663  108327 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings: (3.146937ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0920 04:27:02.071426  108327 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0920 04:27:02.079609  108327 httplog.go:90] GET /healthz: (1.200396ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0920 04:27:02.081518  108327 httplog.go:90] GET /api/v1/namespaces/default: (1.373359ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0920 04:27:02.083777  108327 httplog.go:90] POST /api/v1/namespaces: (1.721189ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0920 04:27:02.085580  108327 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.281581ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0920 04:27:02.089995  108327 httplog.go:90] POST /api/v1/namespaces/default/services: (3.719391ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0920 04:27:02.091330  108327 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (966.286µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0920 04:27:02.093934  108327 httplog.go:90] POST /api/v1/namespaces/default/endpoints: (2.150238ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0920 04:27:02.169619  108327 httplog.go:90] GET /healthz: (1.30644ms) 200 [Go-http-client/1.1 127.0.0.1:41766]
W0920 04:27:02.171016  108327 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0920 04:27:02.171091  108327 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0920 04:27:02.171121  108327 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0920 04:27:02.171131  108327 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0920 04:27:02.171222  108327 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0920 04:27:02.171233  108327 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0920 04:27:02.171242  108327 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0920 04:27:02.171250  108327 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0920 04:27:02.171261  108327 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0920 04:27:02.171288  108327 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0920 04:27:02.171297  108327 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0920 04:27:02.171377  108327 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0920 04:27:02.171421  108327 factory.go:294] Creating scheduler from algorithm provider 'DefaultProvider'
I0920 04:27:02.171430  108327 factory.go:382] Creating scheduler with fit predicates 'map[CheckNodeUnschedulable:{} CheckVolumeBinding:{} GeneralPredicates:{} MatchInterPodAffinity:{} MaxAzureDiskVolumeCount:{} MaxCSIVolumeCountPred:{} MaxEBSVolumeCount:{} MaxGCEPDVolumeCount:{} NoDiskConflict:{} NoVolumeZoneConflict:{} PodToleratesNodeTaints:{}]' and priority functions 'map[BalancedResourceAllocation:{} ImageLocalityPriority:{} InterPodAffinityPriority:{} LeastRequestedPriority:{} NodeAffinityPriority:{} NodePreferAvoidPodsPriority:{} SelectorSpreadPriority:{} TaintTolerationPriority:{}]'
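
The scheduler above registers PodToleratesNodeTaints among its fit predicates. As a minimal, self-contained sketch of the matching rule that predicate applies (local stand-in types, not the k8s.io/kubernetes implementation), a node fits only when every taint on it is tolerated by some toleration on the pod:

package main

import "fmt"

// Local stand-ins for v1.Taint and v1.Toleration. Field names follow the
// Kubernetes API, but this is an illustrative sketch, not the scheduler's code.
type Taint struct {
	Key, Value, Effect string
}

type Toleration struct {
	Key, Operator, Value, Effect string // Operator: "Exists" or "Equal"
}

// tolerates reports whether one toleration matches one taint.
func (t Toleration) tolerates(taint Taint) bool {
	if t.Effect != "" && t.Effect != taint.Effect {
		return false
	}
	if t.Key != "" && t.Key != taint.Key {
		return false
	}
	if t.Operator == "Exists" {
		return true
	}
	return t.Value == taint.Value // "Equal" (the API's default operator)
}

// podFits mirrors the PodToleratesNodeTaints idea: every taint on the node
// must be tolerated by at least one of the pod's tolerations.
func podFits(taints []Taint, tolerations []Toleration) bool {
	for _, taint := range taints {
		tolerated := false
		for _, tol := range tolerations {
			if tol.tolerates(taint) {
				tolerated = true
				break
			}
		}
		if !tolerated {
			return false
		}
	}
	return true
}

func main() {
	taints := []Taint{{Key: "node.kubernetes.io/not-ready", Effect: "NoSchedule"}}
	tols := []Toleration{{Key: "node.kubernetes.io/not-ready", Operator: "Exists", Effect: "NoSchedule"}}
	fmt.Println(podFits(taints, tols)) // true
}
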
I0920 04:27:02.171660  108327 shared_informer.go:197] Waiting for caches to sync for scheduler
I0920 04:27:02.171834  108327 reflector.go:118] Starting reflector *v1.Pod (12h0m0s) from k8s.io/kubernetes/test/integration/scheduler/util.go:232
I0920 04:27:02.171846  108327 reflector.go:153] Listing and watching *v1.Pod from k8s.io/kubernetes/test/integration/scheduler/util.go:232
I0920 04:27:02.172824  108327 httplog.go:90] GET /api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0: (699.727µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41766]
I0920 04:27:02.173832  108327 get.go:251] Starting watch for /api/v1/pods, rv=58819 labels= fields=status.phase!=Failed,status.phase!=Succeeded timeout=7m5s
I0920 04:27:02.271862  108327 shared_informer.go:227] caches populated
I0920 04:27:02.271903  108327 shared_informer.go:204] Caches are synced for scheduler 
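
The reflector lines above show the standard client-go startup sequence: an initial LIST at resourceVersion=0 (here filtered with fieldSelector status.phase!=Failed,status.phase!=Succeeded to skip terminal pods), then a WATCH from the returned revision, after which the shared informer reports its caches synced. A minimal sketch of that wait, using a fake clientset so it runs standalone; the 1s resync period is the one visible in the factory lines below:

package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes/fake"
	"k8s.io/client-go/tools/cache"
)

func main() {
	client := fake.NewSimpleClientset()
	// 1s default resync, matching the "(1s)" reflectors in this log.
	factory := informers.NewSharedInformerFactory(client, time.Second)
	podInformer := factory.Core().V1().Pods().Informer()

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)

	// Block until the initial LIST has populated the local cache,
	// mirroring "Waiting for caches to sync" / "Caches are synced".
	if !cache.WaitForCacheSync(stop, podInformer.HasSynced) {
		fmt.Println("caches never synced")
		return
	}
	fmt.Println("caches are synced")
}
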
I0920 04:27:02.272353  108327 reflector.go:118] Starting reflector *v1.Service (1s) from k8s.io/client-go/informers/factory.go:134
I0920 04:27:02.272370  108327 reflector.go:118] Starting reflector *v1.ReplicationController (1s) from k8s.io/client-go/informers/factory.go:134
I0920 04:27:02.272404  108327 reflector.go:153] Listing and watching *v1.ReplicationController from k8s.io/client-go/informers/factory.go:134
I0920 04:27:02.272423  108327 reflector.go:153] Listing and watching *v1.Service from k8s.io/client-go/informers/factory.go:134
I0920 04:27:02.272427  108327 reflector.go:118] Starting reflector *v1.PersistentVolumeClaim (1s) from k8s.io/client-go/informers/factory.go:134
I0920 04:27:02.272541  108327 reflector.go:153] Listing and watching *v1.PersistentVolumeClaim from k8s.io/client-go/informers/factory.go:134
I0920 04:27:02.272736  108327 reflector.go:118] Starting reflector *v1beta1.PodDisruptionBudget (1s) from k8s.io/client-go/informers/factory.go:134
I0920 04:27:02.272751  108327 reflector.go:153] Listing and watching *v1beta1.PodDisruptionBudget from k8s.io/client-go/informers/factory.go:134
I0920 04:27:02.272824  108327 reflector.go:118] Starting reflector *v1.ReplicaSet (1s) from k8s.io/client-go/informers/factory.go:134
I0920 04:27:02.272836  108327 reflector.go:153] Listing and watching *v1.ReplicaSet from k8s.io/client-go/informers/factory.go:134
I0920 04:27:02.272979  108327 reflector.go:118] Starting reflector *v1.StatefulSet (1s) from k8s.io/client-go/informers/factory.go:134
I0920 04:27:02.272994  108327 reflector.go:153] Listing and watching *v1.StatefulSet from k8s.io/client-go/informers/factory.go:134
I0920 04:27:02.273130  108327 reflector.go:118] Starting reflector *v1.StorageClass (1s) from k8s.io/client-go/informers/factory.go:134
I0920 04:27:02.273145  108327 reflector.go:153] Listing and watching *v1.StorageClass from k8s.io/client-go/informers/factory.go:134
I0920 04:27:02.273545  108327 reflector.go:118] Starting reflector *v1beta1.CSINode (1s) from k8s.io/client-go/informers/factory.go:134
I0920 04:27:02.273551  108327 reflector.go:118] Starting reflector *v1.Node (1s) from k8s.io/client-go/informers/factory.go:134
I0920 04:27:02.273568  108327 httplog.go:90] GET /api/v1/replicationcontrollers?limit=500&resourceVersion=0: (655.804µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41768]
I0920 04:27:02.273571  108327 reflector.go:153] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:134
I0920 04:27:02.274146  108327 httplog.go:90] GET /api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: (617.187µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0920 04:27:02.274561  108327 httplog.go:90] GET /apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: (302.811µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0920 04:27:02.274601  108327 httplog.go:90] GET /apis/apps/v1/statefulsets?limit=500&resourceVersion=0: (417.021µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41768]
I0920 04:27:02.274560  108327 httplog.go:90] GET /apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: (457.401µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42050]
I0920 04:27:02.273562  108327 reflector.go:153] Listing and watching *v1beta1.CSINode from k8s.io/client-go/informers/factory.go:134
I0920 04:27:02.275059  108327 get.go:251] Starting watch for /api/v1/persistentvolumeclaims, rv=58818 labels= fields= timeout=7m58s
I0920 04:27:02.275077  108327 httplog.go:90] GET /api/v1/nodes?limit=500&resourceVersion=0: (408.176µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42048]
I0920 04:27:02.275318  108327 httplog.go:90] GET /apis/storage.k8s.io/v1beta1/csinodes?limit=500&resourceVersion=0: (407.529µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42056]
I0920 04:27:02.275340  108327 get.go:251] Starting watch for /apis/policy/v1beta1/poddisruptionbudgets, rv=58820 labels= fields= timeout=7m38s
I0920 04:27:02.275384  108327 get.go:251] Starting watch for /api/v1/replicationcontrollers, rv=58819 labels= fields= timeout=8m40s
I0920 04:27:02.275597  108327 get.go:251] Starting watch for /apis/storage.k8s.io/v1/storageclasses, rv=58820 labels= fields= timeout=6m40s
I0920 04:27:02.275732  108327 get.go:251] Starting watch for /apis/apps/v1/statefulsets, rv=58821 labels= fields= timeout=9m45s
I0920 04:27:02.275851  108327 get.go:251] Starting watch for /api/v1/nodes, rv=58819 labels= fields= timeout=7m24s
I0920 04:27:02.276133  108327 get.go:251] Starting watch for /apis/storage.k8s.io/v1beta1/csinodes, rv=58820 labels= fields= timeout=8m16s
I0920 04:27:02.276148  108327 httplog.go:90] GET /api/v1/services?limit=500&resourceVersion=0: (455.064µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42056]
I0920 04:27:02.276812  108327 reflector.go:118] Starting reflector *v1.PersistentVolume (1s) from k8s.io/client-go/informers/factory.go:134
I0920 04:27:02.276834  108327 reflector.go:153] Listing and watching *v1.PersistentVolume from k8s.io/client-go/informers/factory.go:134
I0920 04:27:02.276857  108327 httplog.go:90] GET /apis/apps/v1/replicasets?limit=500&resourceVersion=0: (406.674µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42056]
I0920 04:27:02.277003  108327 get.go:251] Starting watch for /api/v1/services, rv=59119 labels= fields= timeout=6m29s
I0920 04:27:02.277434  108327 get.go:251] Starting watch for /apis/apps/v1/replicasets, rv=58821 labels= fields= timeout=7m33s
I0920 04:27:02.277508  108327 httplog.go:90] GET /api/v1/persistentvolumes?limit=500&resourceVersion=0: (401.936µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42062]
I0920 04:27:02.278374  108327 get.go:251] Starting watch for /api/v1/persistentvolumes, rv=58818 labels= fields= timeout=5m1s
I0920 04:27:02.372256  108327 shared_informer.go:227] caches populated
I0920 04:27:02.372353  108327 shared_informer.go:227] caches populated
I0920 04:27:02.372360  108327 shared_informer.go:227] caches populated
I0920 04:27:02.372364  108327 shared_informer.go:227] caches populated
I0920 04:27:02.372368  108327 shared_informer.go:227] caches populated
I0920 04:27:02.372372  108327 shared_informer.go:227] caches populated
I0920 04:27:02.372377  108327 shared_informer.go:227] caches populated
I0920 04:27:02.372381  108327 shared_informer.go:227] caches populated
I0920 04:27:02.372384  108327 shared_informer.go:227] caches populated
I0920 04:27:02.372443  108327 shared_informer.go:227] caches populated
I0920 04:27:02.372459  108327 shared_informer.go:227] caches populated
I0920 04:27:02.372580  108327 node_lifecycle_controller.go:327] Sending events to api server.
I0920 04:27:02.372712  108327 node_lifecycle_controller.go:359] Controller is using taint based evictions.
W0920 04:27:02.372795  108327 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0920 04:27:02.372959  108327 taint_manager.go:162] Sending events to api server.
I0920 04:27:02.373108  108327 node_lifecycle_controller.go:453] Controller will reconcile labels.
I0920 04:27:02.373163  108327 node_lifecycle_controller.go:465] Controller will taint node by condition.
W0920 04:27:02.373183  108327 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0920 04:27:02.373209  108327 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0920 04:27:02.373359  108327 node_lifecycle_controller.go:488] Starting node controller
I0920 04:27:02.373414  108327 shared_informer.go:197] Waiting for caches to sync for taint
I0920 04:27:02.380453  108327 httplog.go:90] POST /api/v1/namespaces: (6.259089ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42066]
I0920 04:27:02.380793  108327 node_lifecycle_controller.go:327] Sending events to api server.
I0920 04:27:02.380883  108327 node_lifecycle_controller.go:359] Controller is using taint based evictions.
I0920 04:27:02.380971  108327 taint_manager.go:162] Sending events to api server.
I0920 04:27:02.381037  108327 node_lifecycle_controller.go:453] Controller will reconcile labels.
I0920 04:27:02.381061  108327 node_lifecycle_controller.go:465] Controller will taint node by condition.
I0920 04:27:02.381113  108327 node_lifecycle_controller.go:488] Starting node controller
I0920 04:27:02.381135  108327 shared_informer.go:197] Waiting for caches to sync for taint
I0920 04:27:02.381337  108327 reflector.go:118] Starting reflector *v1.Namespace (1s) from k8s.io/client-go/informers/factory.go:134
I0920 04:27:02.381357  108327 reflector.go:153] Listing and watching *v1.Namespace from k8s.io/client-go/informers/factory.go:134
I0920 04:27:02.382820  108327 httplog.go:90] GET /api/v1/namespaces?limit=500&resourceVersion=0: (1.104145ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42066]
I0920 04:27:02.384016  108327 get.go:251] Starting watch for /api/v1/namespaces, rv=59129 labels= fields= timeout=5m57s
I0920 04:27:02.481418  108327 shared_informer.go:227] caches populated
I0920 04:27:02.481481  108327 shared_informer.go:227] caches populated
I0920 04:27:02.481487  108327 shared_informer.go:227] caches populated
I0920 04:27:02.481775  108327 reflector.go:118] Starting reflector *v1.DaemonSet (1s) from k8s.io/client-go/informers/factory.go:134
I0920 04:27:02.481957  108327 reflector.go:153] Listing and watching *v1.DaemonSet from k8s.io/client-go/informers/factory.go:134
I0920 04:27:02.481845  108327 reflector.go:118] Starting reflector *v1.Pod (1s) from k8s.io/client-go/informers/factory.go:134
I0920 04:27:02.482011  108327 reflector.go:153] Listing and watching *v1.Pod from k8s.io/client-go/informers/factory.go:134
I0920 04:27:02.481855  108327 reflector.go:118] Starting reflector *v1beta1.Lease (1s) from k8s.io/client-go/informers/factory.go:134
I0920 04:27:02.482038  108327 reflector.go:153] Listing and watching *v1beta1.Lease from k8s.io/client-go/informers/factory.go:134
I0920 04:27:02.483555  108327 httplog.go:90] GET /api/v1/pods?limit=500&resourceVersion=0: (642.094µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42072]
I0920 04:27:02.483556  108327 httplog.go:90] GET /apis/coordination.k8s.io/v1beta1/leases?limit=500&resourceVersion=0: (454.282µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42070]
I0920 04:27:02.484294  108327 get.go:251] Starting watch for /api/v1/pods, rv=58819 labels= fields= timeout=6m50s
I0920 04:27:02.484772  108327 httplog.go:90] GET /apis/apps/v1/daemonsets?limit=500&resourceVersion=0: (566.277µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42074]
I0920 04:27:02.485160  108327 get.go:251] Starting watch for /apis/coordination.k8s.io/v1beta1/leases, rv=58820 labels= fields= timeout=7m54s
I0920 04:27:02.485230  108327 get.go:251] Starting watch for /apis/apps/v1/daemonsets, rv=58821 labels= fields= timeout=6m0s
I0920 04:27:02.573579  108327 shared_informer.go:227] caches populated
I0920 04:27:02.573614  108327 shared_informer.go:204] Caches are synced for taint 
I0920 04:27:02.573704  108327 taint_manager.go:186] Starting NoExecuteTaintManager
I0920 04:27:02.581378  108327 shared_informer.go:227] caches populated
I0920 04:27:02.581572  108327 shared_informer.go:204] Caches are synced for taint 
I0920 04:27:02.581755  108327 taint_manager.go:186] Starting NoExecuteTaintManager
I0920 04:27:02.582092  108327 shared_informer.go:227] caches populated
I0920 04:27:02.582114  108327 shared_informer.go:227] caches populated
I0920 04:27:02.582121  108327 shared_informer.go:227] caches populated
I0920 04:27:02.582127  108327 shared_informer.go:227] caches populated
I0920 04:27:02.582132  108327 shared_informer.go:227] caches populated
I0920 04:27:02.582138  108327 shared_informer.go:227] caches populated
I0920 04:27:02.582145  108327 shared_informer.go:227] caches populated
I0920 04:27:02.582154  108327 shared_informer.go:227] caches populated
I0920 04:27:02.582160  108327 shared_informer.go:227] caches populated
I0920 04:27:02.582166  108327 shared_informer.go:227] caches populated
I0920 04:27:02.582173  108327 shared_informer.go:227] caches populated
I0920 04:27:02.585786  108327 httplog.go:90] POST /api/v1/nodes: (2.803478ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:02.586076  108327 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-0"}
I0920 04:27:02.586101  108327 taint_manager.go:438] Updating known taints on node node-0: []
I0920 04:27:02.586077  108327 node_tree.go:93] Added node "node-0" in group "region1:\x00:zone1" to NodeTree
I0920 04:27:02.586111  108327 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-0"}
I0920 04:27:02.586129  108327 taint_manager.go:438] Updating known taints on node node-0: []
I0920 04:27:02.588174  108327 httplog.go:90] POST /api/v1/nodes: (1.662647ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:02.588587  108327 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-1"}
I0920 04:27:02.588618  108327 taint_manager.go:438] Updating known taints on node node-1: []
I0920 04:27:02.588646  108327 node_tree.go:93] Added node "node-1" in group "region1:\x00:zone1" to NodeTree
I0920 04:27:02.588718  108327 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-1"}
I0920 04:27:02.588733  108327 taint_manager.go:438] Updating known taints on node node-1: []
I0920 04:27:02.590615  108327 httplog.go:90] POST /api/v1/nodes: (1.704877ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:02.590905  108327 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-2"}
I0920 04:27:02.591021  108327 taint_manager.go:438] Updating known taints on node node-2: []
I0920 04:27:02.590944  108327 node_tree.go:93] Added node "node-2" in group "region1:\x00:zone1" to NodeTree
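
The NodeTree groups above are keyed by "region1:\x00:zone1": region and zone joined with a NUL byte so the key cannot collide with label values that contain ":". A tiny sketch of that key construction (the helper name here is illustrative; the real helper lives in the node utility package):

package main

import "fmt"

// zoneKey joins region and zone with a NUL separator, as seen in the
// "region1:\x00:zone1" group names in this log.
func zoneKey(region, zone string) string {
	if region == "" && zone == "" {
		return ""
	}
	return region + ":\x00:" + zone
}

func main() {
	fmt.Printf("%q\n", zoneKey("region1", "zone1")) // "region1:\x00:zone1"
}
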
I0920 04:27:02.596119  108327 httplog.go:90] POST /api/v1/namespaces/taint-based-evictions8302cfd2-aef7-421c-9bbc-d15430c938a2/pods: (4.637699ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:02.596526  108327 taint_manager.go:398] Noticed pod update: types.NamespacedName{Namespace:"taint-based-evictions8302cfd2-aef7-421c-9bbc-d15430c938a2", Name:"testpod-0"}
I0920 04:27:02.596565  108327 taint_manager.go:398] Noticed pod update: types.NamespacedName{Namespace:"taint-based-evictions8302cfd2-aef7-421c-9bbc-d15430c938a2", Name:"testpod-0"}
I0920 04:27:02.597100  108327 scheduling_queue.go:830] About to try and schedule pod taint-based-evictions8302cfd2-aef7-421c-9bbc-d15430c938a2/testpod-0
I0920 04:27:02.597125  108327 scheduler.go:530] Attempting to schedule pod: taint-based-evictions8302cfd2-aef7-421c-9bbc-d15430c938a2/testpod-0
I0920 04:27:02.597756  108327 scheduler_binder.go:257] AssumePodVolumes for pod "taint-based-evictions8302cfd2-aef7-421c-9bbc-d15430c938a2/testpod-0", node "node-2"
I0920 04:27:02.597782  108327 scheduler_binder.go:267] AssumePodVolumes for pod "taint-based-evictions8302cfd2-aef7-421c-9bbc-d15430c938a2/testpod-0", node "node-2": all PVCs bound and nothing to do
I0920 04:27:02.597854  108327 factory.go:606] Attempting to bind testpod-0 to node-2
I0920 04:27:02.590918  108327 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-2"}
I0920 04:27:02.598484  108327 taint_manager.go:438] Updating known taints on node node-2: []
I0920 04:27:02.600151  108327 httplog.go:90] POST /api/v1/namespaces/taint-based-evictions8302cfd2-aef7-421c-9bbc-d15430c938a2/pods/testpod-0/binding: (1.930739ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:02.600409  108327 scheduler.go:662] pod taint-based-evictions8302cfd2-aef7-421c-9bbc-d15430c938a2/testpod-0 is bound successfully on node "node-2", 3 nodes evaluated, 3 nodes were found feasible. Bound node resource: "Capacity: CPU<4>|Memory<16Gi>|Pods<110>|StorageEphemeral<0>; Allocatable: CPU<4>|Memory<16Gi>|Pods<110>|StorageEphemeral<0>.".
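
The bind above is an ordinary API call: the scheduler POSTs a Binding object to the pod's binding subresource, the endpoint visible in the httplog line. A sketch of the object it sends, using the core API types; the clientset call itself is omitted since its signature has varied across releases:

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// POSTed to /api/v1/namespaces/<ns>/pods/testpod-0/binding.
	binding := &v1.Binding{
		ObjectMeta: metav1.ObjectMeta{
			Namespace: "taint-based-evictions8302cfd2-aef7-421c-9bbc-d15430c938a2",
			Name:      "testpod-0",
		},
		Target: v1.ObjectReference{
			Kind: "Node",
			Name: "node-2",
		},
	}
	fmt.Printf("bind %s/%s -> %s\n", binding.Namespace, binding.Name, binding.Target.Name)
}
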
I0920 04:27:02.601602  108327 taint_manager.go:398] Noticed pod update: types.NamespacedName{Namespace:"taint-based-evictions8302cfd2-aef7-421c-9bbc-d15430c938a2", Name:"testpod-0"}
I0920 04:27:02.601835  108327 taint_manager.go:398] Noticed pod update: types.NamespacedName{Namespace:"taint-based-evictions8302cfd2-aef7-421c-9bbc-d15430c938a2", Name:"testpod-0"}
I0920 04:27:02.602892  108327 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/taint-based-evictions8302cfd2-aef7-421c-9bbc-d15430c938a2/events: (2.018889ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:02.699049  108327 httplog.go:90] GET /api/v1/namespaces/taint-based-evictions8302cfd2-aef7-421c-9bbc-d15430c938a2/pods/testpod-0: (2.119154ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:02.701836  108327 httplog.go:90] GET /api/v1/namespaces/taint-based-evictions8302cfd2-aef7-421c-9bbc-d15430c938a2/pods/testpod-0: (1.563611ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:02.703548  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.129377ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:02.706225  108327 httplog.go:90] PUT /api/v1/nodes/node-2/status: (2.166824ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:02.707200  108327 httplog.go:90] GET /api/v1/nodes/node-2?resourceVersion=0: (388.543µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:02.707947  108327 httplog.go:90] GET /api/v1/nodes/node-2?resourceVersion=0: (699.05µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42088]
I0920 04:27:02.711012  108327 store.go:362] GuaranteedUpdate of /a3a331f8-5bf8-422e-9003-d45c96363f0d/minions/node-2 failed because of a conflict, going to retry
I0920 04:27:02.711038  108327 httplog.go:90] PATCH /api/v1/nodes/node-2: (2.140994ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:02.712242  108327 httplog.go:90] PATCH /api/v1/nodes/node-2: (3.401902ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42088]
I0920 04:27:02.712303  108327 controller_utils.go:204] Added [&Taint{Key:node.kubernetes.io/not-ready,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 04:27:02.706708765 +0000 UTC m=+291.005476324,}] Taint to Node node-2
I0920 04:27:02.712341  108327 controller_utils.go:216] Made sure that Node node-2 has no [] Taint
I0920 04:27:02.712520  108327 controller_utils.go:204] Added [&Taint{Key:node.kubernetes.io/not-ready,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 04:27:02.70657632 +0000 UTC m=+291.005343901,}] Taint to Node node-2
I0920 04:27:02.712552  108327 controller_utils.go:216] Made sure that Node node-2 has no [] Taint
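
The PATCHes above are the taint-nodes-by-condition path: the node's Ready condition went False, so the controller ensures a node.kubernetes.io/not-ready:NoSchedule taint (the lifecycle controller's eviction path later adds the NoExecute variant). A self-contained sketch of that condition-to-taint mapping, with local types rather than the controller's own:

package main

import "fmt"

type Condition struct {
	Type   string // e.g. "Ready"
	Status string // "True", "False", "Unknown"
}

type Taint struct {
	Key, Effect string
}

// taintForReady mirrors the taint-by-condition idea:
// Ready=False -> not-ready taint, Ready=Unknown -> unreachable taint.
func taintForReady(c Condition) *Taint {
	if c.Type != "Ready" {
		return nil
	}
	switch c.Status {
	case "False":
		return &Taint{Key: "node.kubernetes.io/not-ready", Effect: "NoSchedule"}
	case "Unknown":
		return &Taint{Key: "node.kubernetes.io/unreachable", Effect: "NoSchedule"}
	}
	return nil
}

func main() {
	if t := taintForReady(Condition{Type: "Ready", Status: "False"}); t != nil {
		fmt.Printf("add %s:%s\n", t.Key, t.Effect) // matches the taint added to node-2 above
	}
}
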
I0920 04:27:02.808855  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.87174ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42088]
I0920 04:27:02.849353  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:02.849608  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:02.849832  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:02.852316  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:02.852464  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:02.853697  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:02.854282  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:02.908422  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.397452ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42088]
I0920 04:27:03.008802  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.819328ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42088]
I0920 04:27:03.108618  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.614183ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42088]
I0920 04:27:03.209198  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.234706ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42088]
I0920 04:27:03.274732  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:03.275465  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:03.275598  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:03.275936  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:03.276723  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:03.277995  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:03.308874  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.862505ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42088]
I0920 04:27:03.409238  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.170263ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42088]
I0920 04:27:03.484158  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:03.508834  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.804228ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42088]
I0920 04:27:03.608994  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.96614ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42088]
I0920 04:27:03.709124  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.034362ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42088]
I0920 04:27:03.809323  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.281106ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42088]
I0920 04:27:03.849598  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:03.849762  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:03.849983  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:03.852602  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:03.852641  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:03.853868  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:03.854446  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:03.908901  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.897439ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42088]
I0920 04:27:04.008761  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.761859ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42088]
I0920 04:27:04.108785  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.8424ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42088]
I0920 04:27:04.209085  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.050052ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42088]
I0920 04:27:04.274882  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:04.275590  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:04.275769  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:04.276059  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:04.276904  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:04.278164  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:04.308723  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.732623ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42088]
I0920 04:27:04.420972  108327 httplog.go:90] GET /api/v1/nodes/node-2: (14.001948ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42088]
I0920 04:27:04.484344  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:04.519585  108327 httplog.go:90] GET /api/v1/nodes/node-2: (12.618517ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42088]
I0920 04:27:04.608595  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.630963ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42088]
I0920 04:27:04.708816  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.832924ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42088]
I0920 04:27:04.808835  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.798526ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42088]
I0920 04:27:04.849774  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:04.850010  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:04.850139  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:04.852777  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:04.852792  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:04.854031  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:04.854613  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:04.908671  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.647268ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42088]
I0920 04:27:05.008923  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.903541ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42088]
I0920 04:27:05.109370  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.320508ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42088]
I0920 04:27:05.209340  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.267828ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42088]
I0920 04:27:05.275143  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:05.275770  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:05.276073  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:05.276378  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:05.277085  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:05.278429  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:05.308742  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.761996ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42088]
I0920 04:27:05.408834  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.83966ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42088]
I0920 04:27:05.484868  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:05.508732  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.792173ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42088]
I0920 04:27:05.608719  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.715456ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42088]
I0920 04:27:05.708829  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.824979ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42088]
I0920 04:27:05.749569  108327 httplog.go:90] GET /api/v1/namespaces/default: (1.522331ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52112]
I0920 04:27:05.751632  108327 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.481552ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52112]
I0920 04:27:05.753087  108327 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (975.226µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52112]
I0920 04:27:05.809164  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.083818ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42088]
I0920 04:27:05.850217  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:05.850292  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:05.850331  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:05.853085  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:05.853084  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:05.854219  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:05.854822  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:05.909312  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.172446ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42088]
I0920 04:27:06.009003  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.956148ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42088]
I0920 04:27:06.108742  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.691967ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42088]
I0920 04:27:06.208885  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.786063ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42088]
I0920 04:27:06.275320  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:06.275966  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:06.276241  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:06.276483  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:06.277274  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:06.278735  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:06.309105  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.138389ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42088]
I0920 04:27:06.408920  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.89471ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42088]
I0920 04:27:06.485059  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:06.508784  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.800079ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42088]
I0920 04:27:06.609004  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.922755ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42088]
I0920 04:27:06.709079  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.100935ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42088]
I0920 04:27:06.809190  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.093039ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42088]
I0920 04:27:06.850510  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:06.850564  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:06.850584  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:06.853258  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:06.853455  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:06.854432  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:06.854993  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:06.908610  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.605804ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42088]
I0920 04:27:07.009001  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.887393ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42088]
I0920 04:27:07.108592  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.620952ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42088]
I0920 04:27:07.208835  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.850112ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42088]
I0920 04:27:07.276100  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:07.276487  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:07.276994  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:07.277020  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:07.277524  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:07.278882  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:07.309202  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.179956ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42088]
I0920 04:27:07.410883  108327 httplog.go:90] GET /api/v1/nodes/node-2: (3.860797ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42088]
I0920 04:27:07.485268  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:07.508595  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.640847ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42088]
I0920 04:27:07.573916  108327 node_lifecycle_controller.go:706] Controller observed a new Node: "node-2"
I0920 04:27:07.574015  108327 controller_utils.go:168] Recording Registered Node node-2 in Controller event message for node node-2
I0920 04:27:07.574134  108327 node_lifecycle_controller.go:1244] Initializing eviction metric for zone: region1:\x00:zone1
I0920 04:27:07.574166  108327 node_lifecycle_controller.go:706] Controller observed a new Node: "node-0"
I0920 04:27:07.574174  108327 controller_utils.go:168] Recording Registered Node node-0 in Controller event message for node node-0
I0920 04:27:07.574186  108327 node_lifecycle_controller.go:706] Controller observed a new Node: "node-1"
I0920 04:27:07.574192  108327 controller_utils.go:168] Recording Registered Node node-1 in Controller event message for node node-1
W0920 04:27:07.574237  108327 node_lifecycle_controller.go:940] Missing timestamp for Node node-2. Assuming now as a timestamp.
I0920 04:27:07.574284  108327 node_lifecycle_controller.go:770] Node node-2 is NotReady as of 2019-09-20 04:27:07.57426591 +0000 UTC m=+295.873033463. Adding it to the Taint queue.
W0920 04:27:07.574320  108327 node_lifecycle_controller.go:940] Missing timestamp for Node node-0. Assuming now as a timestamp.
W0920 04:27:07.574352  108327 node_lifecycle_controller.go:940] Missing timestamp for Node node-1. Assuming now as a timestamp.
I0920 04:27:07.574381  108327 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-2", UID:"05c0d0f6-47a4-42e5-853b-2b54522aa169", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node node-2 event: Registered Node node-2 in Controller
I0920 04:27:07.574448  108327 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-0", UID:"3d4c3db7-aa91-47f2-9f17-67273a75878c", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node node-0 event: Registered Node node-0 in Controller
I0920 04:27:07.574458  108327 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-1", UID:"98377c17-ec0f-4818-a8bb-b2d06697a430", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node node-1 event: Registered Node node-1 in Controller
I0920 04:27:07.574428  108327 node_lifecycle_controller.go:1144] Controller detected that zone region1:\x00:zone1 is now in state Normal.
I0920 04:27:07.577019  108327 httplog.go:90] POST /api/v1/namespaces/default/events: (2.279716ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42088]
I0920 04:27:07.579471  108327 httplog.go:90] POST /api/v1/namespaces/default/events: (1.937829ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42088]
I0920 04:27:07.581585  108327 httplog.go:90] POST /api/v1/namespaces/default/events: (1.381297ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42088]
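
The RegisteredNode events above are emitted through client-go's event recorder and land as the POSTs to /api/v1/namespaces/default/events. A minimal recorder sketch, assuming client-go as vendored in this repo; the component name is illustrative, and a real controller keeps the broadcaster alive rather than exiting immediately:

package main

import (
	v1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes/fake"
	"k8s.io/client-go/kubernetes/scheme"
	typedcorev1 "k8s.io/client-go/kubernetes/typed/core/v1"
	"k8s.io/client-go/tools/record"
)

func main() {
	client := fake.NewSimpleClientset()
	broadcaster := record.NewBroadcaster()
	// Sink events to the API server, as node_lifecycle_controller does.
	broadcaster.StartRecordingToSink(&typedcorev1.EventSinkImpl{
		Interface: client.CoreV1().Events(""),
	})
	recorder := broadcaster.NewRecorder(scheme.Scheme, v1.EventSource{Component: "node-controller"})

	node := &v1.Node{}
	node.Name = "node-2"
	recorder.Eventf(node, v1.EventTypeNormal, "RegisteredNode",
		"Node %s event: Registered Node %s in Controller", node.Name, node.Name)
}
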
I0920 04:27:07.581869  108327 node_lifecycle_controller.go:706] Controller observed a new Node: "node-0"
I0920 04:27:07.581891  108327 controller_utils.go:168] Recording Registered Node node-0 in Controller event message for node node-0
I0920 04:27:07.581974  108327 node_lifecycle_controller.go:1244] Initializing eviction metric for zone: region1:\x00:zone1
I0920 04:27:07.581994  108327 node_lifecycle_controller.go:706] Controller observed a new Node: "node-1"
I0920 04:27:07.581999  108327 controller_utils.go:168] Recording Registered Node node-1 in Controller event message for node node-1
I0920 04:27:07.582009  108327 node_lifecycle_controller.go:706] Controller observed a new Node: "node-2"
I0920 04:27:07.582013  108327 controller_utils.go:168] Recording Registered Node node-2 in Controller event message for node node-2
W0920 04:27:07.582039  108327 node_lifecycle_controller.go:940] Missing timestamp for Node node-0. Assuming now as a timestamp.
W0920 04:27:07.582121  108327 node_lifecycle_controller.go:940] Missing timestamp for Node node-1. Assuming now as a timestamp.
W0920 04:27:07.582161  108327 node_lifecycle_controller.go:940] Missing timestamp for Node node-2. Assuming now as a timestamp.
I0920 04:27:07.582179  108327 node_lifecycle_controller.go:770] Node node-2 is NotReady as of 2019-09-20 04:27:07.582167859 +0000 UTC m=+295.880935393. Adding it to the Taint queue.
I0920 04:27:07.582092  108327 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-0", UID:"3d4c3db7-aa91-47f2-9f17-67273a75878c", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node node-0 event: Registered Node node-0 in Controller
I0920 04:27:07.582231  108327 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-1", UID:"98377c17-ec0f-4818-a8bb-b2d06697a430", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node node-1 event: Registered Node node-1 in Controller
I0920 04:27:07.582238  108327 node_lifecycle_controller.go:1144] Controller detected that zone region1:\x00:zone1 is now in state Normal.
I0920 04:27:07.582244  108327 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-2", UID:"05c0d0f6-47a4-42e5-853b-2b54522aa169", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node node-2 event: Registered Node node-2 in Controller
I0920 04:27:07.582420  108327 httplog.go:90] GET /api/v1/nodes/node-2?resourceVersion=0: (1.044313ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:07.583829  108327 httplog.go:90] POST /api/v1/namespaces/default/events: (1.329347ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42088]
I0920 04:27:07.585639  108327 httplog.go:90] POST /api/v1/namespaces/default/events: (1.422359ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42088]
I0920 04:27:07.586260  108327 httplog.go:90] PATCH /api/v1/nodes/node-2: (3.091413ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:07.586592  108327 controller_utils.go:204] Added [&Taint{Key:node.kubernetes.io/not-ready,Value:,Effect:NoExecute,TimeAdded:2019-09-20 04:27:07.580936695 +0000 UTC m=+295.879704229,}] Taint to Node node-2
I0920 04:27:07.586635  108327 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-2"}
I0920 04:27:07.586635  108327 controller_utils.go:216] Made sure that Node node-2 has no [&Taint{Key:node.kubernetes.io/unreachable,Value:,Effect:NoExecute,TimeAdded:<nil>,}] Taint
I0920 04:27:07.586654  108327 taint_manager.go:438] Updating known taints on node node-2: [{node.kubernetes.io/not-ready  NoExecute 2019-09-20 04:27:07 +0000 UTC}]
I0920 04:27:07.586719  108327 timed_workers.go:110] Adding TimedWorkerQueue item taint-based-evictions8302cfd2-aef7-421c-9bbc-d15430c938a2/testpod-0 at 2019-09-20 04:27:07.58671009 +0000 UTC m=+295.885477650 to be fired at 2019-09-20 04:30:27.58671009 +0000 UTC m=+495.885477650
I0920 04:27:07.586765  108327 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-2"}
I0920 04:27:07.586774  108327 taint_manager.go:438] Updating known taints on node node-2: [{node.kubernetes.io/not-ready  NoExecute 2019-09-20 04:27:07 +0000 UTC}]
I0920 04:27:07.586808  108327 timed_workers.go:110] Adding TimedWorkerQueue item taint-based-evictions8302cfd2-aef7-421c-9bbc-d15430c938a2/testpod-0 at 2019-09-20 04:27:07.586802211 +0000 UTC m=+295.885569770 to be fired at 2019-09-20 04:30:27.586802211 +0000 UTC m=+495.885569770
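
Each TimedWorkerQueue item above is added at one time and fired 3m20s later: the NoExecute taint manager schedules the pod's eviction for taint-creation time plus the pod's tolerationSeconds, and cancels it if the taint goes away first. A minimal stdlib sketch of that deferred-work pattern with time.AfterFunc (local code, not the k8s.io/kubernetes TimedWorkerQueue):

package main

import (
	"fmt"
	"sync"
	"time"
)

// timedQueue fires a function per key at a chosen time; removing the key
// before then cancels the pending work, like a removed taint cancelling
// a scheduled eviction.
type timedQueue struct {
	mu      sync.Mutex
	pending map[string]*time.Timer
}

func (q *timedQueue) addAt(key string, fireAt time.Time, fn func()) {
	q.mu.Lock()
	defer q.mu.Unlock()
	if q.pending == nil {
		q.pending = map[string]*time.Timer{}
	}
	q.pending[key] = time.AfterFunc(time.Until(fireAt), fn)
}

func (q *timedQueue) cancel(key string) {
	q.mu.Lock()
	defer q.mu.Unlock()
	if t, ok := q.pending[key]; ok {
		t.Stop()
		delete(q.pending, key)
	}
}

func main() {
	var q timedQueue
	toleration := 200 * time.Second // the added->fired gap in the log lines above
	q.addAt("ns/testpod-0", time.Now().Add(toleration), func() {
		fmt.Println("evicting ns/testpod-0")
	})
	// If the NoExecute taint is removed first, the eviction is cancelled:
	q.cancel("ns/testpod-0")
}
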
I0920 04:27:07.588360  108327 httplog.go:90] POST /api/v1/namespaces/default/events: (2.270208ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42088]
I0920 04:27:07.589781  108327 httplog.go:90] GET /api/v1/nodes/node-2?resourceVersion=0: (469.84µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42088]
I0920 04:27:07.592348  108327 httplog.go:90] PATCH /api/v1/nodes/node-2: (1.882818ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42088]
I0920 04:27:07.592647  108327 controller_utils.go:204] Added [&Taint{Key:node.kubernetes.io/not-ready,Value:,Effect:NoExecute,TimeAdded:2019-09-20 04:27:07.589070414 +0000 UTC m=+295.887837948,}] Taint to Node node-2
I0920 04:27:07.592709  108327 controller_utils.go:216] Made sure that Node node-2 has no [&Taint{Key:node.kubernetes.io/unreachable,Value:,Effect:NoExecute,TimeAdded:<nil>,}] Taint
I0920 04:27:07.609090  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.037633ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42088]
I0920 04:27:07.708737  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.748449ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42088]
I0920 04:27:07.808483  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.467512ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42088]
I0920 04:27:07.850752  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:07.850765  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:07.850770  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:07.853533  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:07.853788  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:07.854647  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:07.855200  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:07.908945  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.924597ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42088]
I0920 04:27:08.009156  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.067562ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42088]
I0920 04:27:08.108583  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.591772ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42088]
I0920 04:27:08.209224  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.142839ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42088]
I0920 04:27:08.276219  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:08.276607  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:08.277073  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:08.277116  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:08.277687  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:08.279168  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:08.308867  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.785039ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42088]
I0920 04:27:08.408709  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.71761ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42088]
I0920 04:27:08.485489  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:08.508940  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.931159ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42088]
I0920 04:27:08.608856  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.859613ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42088]
I0920 04:27:08.708866  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.896164ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42088]
I0920 04:27:08.809135  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.017128ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42088]
I0920 04:27:08.851000  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:08.851011  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:08.851010  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:08.853766  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:08.854043  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:08.854871  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:08.855364  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:08.908720  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.653726ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42088]
I0920 04:27:09.008650  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.720392ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42088]
I0920 04:27:09.109005  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.044881ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42088]
I0920 04:27:09.209123  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.103643ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42088]
I0920 04:27:09.276417  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:09.276791  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:09.277240  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:09.277271  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:09.277958  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:09.279313  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:09.308922  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.850856ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42088]
I0920 04:27:09.408810  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.799887ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42088]
I0920 04:27:09.485688  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:09.508950  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.885061ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42088]
I0920 04:27:09.609219  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.11073ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42088]
I0920 04:27:09.708700  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.690457ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42088]
I0920 04:27:09.808878  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.851362ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42088]
I0920 04:27:09.851233  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:09.851270  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:09.851545  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:09.853931  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:09.854222  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:09.855051  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:09.855531  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:09.908938  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.914158ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42088]
I0920 04:27:10.009238  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.171125ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42088]
I0920 04:27:10.109226  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.105975ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42088]
I0920 04:27:10.209004  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.965035ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42088]
I0920 04:27:10.276608  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:10.277050  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:10.277364  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:10.277419  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:10.278206  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:10.279495  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:10.308762  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.726116ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42088]
I0920 04:27:10.408723  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.734748ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42088]
I0920 04:27:10.485871  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:10.508968  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.942559ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42088]
I0920 04:27:10.609038  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.01791ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42088]
I0920 04:27:10.708731  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.711057ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42088]
I0920 04:27:10.808747  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.725339ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42088]
I0920 04:27:10.851485  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:10.851478  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:10.851797  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:10.854238  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:10.854376  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:10.855228  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:10.855682  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:10.908720  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.728153ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42088]
I0920 04:27:11.008686  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.66852ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42088]
I0920 04:27:11.108870  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.845886ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42088]
I0920 04:27:11.208946  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.942004ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42088]
I0920 04:27:11.276813  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:11.277225  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:11.277485  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:11.277556  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:11.278448  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:11.279696  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:11.308918  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.902651ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42088]
I0920 04:27:11.408925  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.920923ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42088]
I0920 04:27:11.486098  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:11.508767  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.723427ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42088]
I0920 04:27:11.617443  108327 httplog.go:90] GET /api/v1/nodes/node-2: (9.440191ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42088]
I0920 04:27:11.708835  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.874775ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42088]
I0920 04:27:11.808746  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.741082ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42088]
I0920 04:27:11.851745  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:11.851745  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:11.852082  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:11.854444  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:11.854579  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:11.855448  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:11.855834  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:11.908973  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.871812ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42088]
I0920 04:27:12.008998  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.86887ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42088]
I0920 04:27:12.081865  108327 httplog.go:90] GET /api/v1/namespaces/default: (1.406179ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42088]
I0920 04:27:12.083437  108327 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.136011ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42088]
I0920 04:27:12.085133  108327 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.208577ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42088]
I0920 04:27:12.108774  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.687759ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42088]
I0920 04:27:12.208798  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.795766ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42088]
I0920 04:27:12.277010  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:12.277429  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:12.277743  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:12.277749  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:12.278800  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:12.279931  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:12.308662  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.686728ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42088]
I0920 04:27:12.408806  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.754806ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42088]
I0920 04:27:12.458516  108327 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.752693ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52112]
I0920 04:27:12.460335  108327 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.184731ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52112]
I0920 04:27:12.461798  108327 httplog.go:90] GET /api/v1/namespaces/kube-node-lease: (980.831µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52112]
I0920 04:27:12.486412  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:12.509213  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.877307ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42088]
I0920 04:27:12.574731  108327 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 5.000354823s. Last Ready is: &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:0001-01-01 00:00:00 +0000 UTC,Reason:,Message:,}
I0920 04:27:12.574805  108327 node_lifecycle_controller.go:1012] Condition MemoryPressure of node node-1 was never updated by kubelet
I0920 04:27:12.574819  108327 node_lifecycle_controller.go:1012] Condition DiskPressure of node node-1 was never updated by kubelet
I0920 04:27:12.574826  108327 node_lifecycle_controller.go:1012] Condition PIDPressure of node node-1 was never updated by kubelet
I0920 04:27:12.578034  108327 httplog.go:90] PUT /api/v1/nodes/node-1/status: (2.668529ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42088]
I0920 04:27:12.578327  108327 controller_utils.go:180] Recording status change NodeNotReady event message for node node-1
I0920 04:27:12.578364  108327 controller_utils.go:124] Update ready status of pods on node [node-1]
I0920 04:27:12.578515  108327 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-1", UID:"98377c17-ec0f-4818-a8bb-b2d06697a430", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeNotReady' Node node-1 status is now: NodeNotReady
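The Event(...) line above is emitted through client-go's event recorder, which fans events out to the API server and produces the "POST .../events" requests that follow. A minimal sketch of wiring such a recorder, assuming a kubernetes.Interface named client and a hypothetical component name:

package sketch

import (
	v1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/scheme"
	typedcorev1 "k8s.io/client-go/kubernetes/typed/core/v1"
	"k8s.io/client-go/tools/record"
)

// nodeNotReadyEvent mirrors the NodeNotReady event recorded above.
func nodeNotReadyEvent(client kubernetes.Interface, node *v1.Node) {
	broadcaster := record.NewBroadcaster()
	// Sink events to the API server; this is what shows up as
	// "POST /api/v1/namespaces/default/events" in the request log.
	broadcaster.StartRecordingToSink(&typedcorev1.EventSinkImpl{
		Interface: client.CoreV1().Events(""),
	})
	recorder := broadcaster.NewRecorder(scheme.Scheme, v1.EventSource{Component: "node-controller"})
	recorder.Event(node, v1.EventTypeNormal, "NodeNotReady",
		"Node "+node.Name+" status is now: NodeNotReady")
}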
I0920 04:27:12.579055  108327 httplog.go:90] GET /api/v1/nodes/node-1?resourceVersion=0: (434.791µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:12.579856  108327 httplog.go:90] GET /api/v1/pods?fieldSelector=spec.nodeName%3Dnode-1: (1.251018ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42088]
I0920 04:27:12.580260  108327 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 5.006010325s. Last Ready is: &NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:0001-01-01 00:00:00 +0000 UTC,Reason:,Message:,}
I0920 04:27:12.580297  108327 node_lifecycle_controller.go:1012] Condition MemoryPressure of node node-2 was never updated by kubelet
I0920 04:27:12.580308  108327 node_lifecycle_controller.go:1012] Condition DiskPressure of node node-2 was never updated by kubelet
I0920 04:27:12.580314  108327 node_lifecycle_controller.go:1012] Condition PIDPressure of node node-2 was never updated by kubelet
I0920 04:27:12.581013  108327 httplog.go:90] POST /api/v1/namespaces/default/events: (1.890378ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42654]
I0920 04:27:12.581049  108327 httplog.go:90] GET /api/v1/nodes/node-1?resourceVersion=0: (472.918µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:12.582523  108327 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 5.000452937s. Last Ready is: &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:0001-01-01 00:00:00 +0000 UTC,Reason:,Message:,}
I0920 04:27:12.582651  108327 node_lifecycle_controller.go:1012] Condition MemoryPressure of node node-0 was never updated by kubelet
I0920 04:27:12.582686  108327 node_lifecycle_controller.go:1012] Condition DiskPressure of node node-0 was never updated by kubelet
I0920 04:27:12.582707  108327 node_lifecycle_controller.go:1012] Condition PIDPressure of node node-0 was never updated by kubelet
I0920 04:27:12.583912  108327 httplog.go:90] PATCH /api/v1/nodes/node-1: (2.085069ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:12.584267  108327 controller_utils.go:204] Added [&Taint{Key:node.kubernetes.io/unreachable,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 04:27:12.57847442 +0000 UTC m=+300.877242037,}] Taint to Node node-1
I0920 04:27:12.584313  108327 controller_utils.go:216] Made sure that Node node-1 has no [] Taint
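The controller has just attached the node.kubernetes.io/unreachable:NoSchedule taint to node-1. A minimal sketch of what that update amounts to with plain client-go (the controller itself goes through a patch-based helper in controller_utils.go); client is an assumed kubernetes.Interface, and the context-taking signatures assume a recent client-go (v0.18+).

package sketch

import (
	"context"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// addUnreachableTaint is a hypothetical helper mirroring the update in
// the log: fetch the node, append the taint if absent, write it back.
func addUnreachableTaint(ctx context.Context, client kubernetes.Interface, name string) error {
	node, err := client.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	taint := v1.Taint{
		Key:       "node.kubernetes.io/unreachable",
		Effect:    v1.TaintEffectNoSchedule,
		TimeAdded: &metav1.Time{Time: time.Now()},
	}
	for _, t := range node.Spec.Taints {
		if t.Key == taint.Key && t.Effect == taint.Effect {
			return nil // taint already present
		}
	}
	node.Spec.Taints = append(node.Spec.Taints, taint)
	_, err = client.CoreV1().Nodes().Update(ctx, node, metav1.UpdateOptions{})
	return err
}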
I0920 04:27:12.585276  108327 httplog.go:90] PUT /api/v1/nodes/node-0/status: (2.158804ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42654]
I0920 04:27:12.585525  108327 httplog.go:90] PUT /api/v1/nodes/node-2/status: (4.837333ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42088]
I0920 04:27:12.585591  108327 httplog.go:90] GET /api/v1/nodes/node-2?resourceVersion=0: (419.773µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:12.585650  108327 controller_utils.go:180] Recording status change NodeNotReady event message for node node-0
I0920 04:27:12.585865  108327 controller_utils.go:124] Update ready status of pods on node [node-0]
I0920 04:27:12.586053  108327 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 5.011723665s. Last Ready is: &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:0001-01-01 00:00:00 +0000 UTC,Reason:,Message:,}
I0920 04:27:12.586089  108327 node_lifecycle_controller.go:1012] Condition MemoryPressure of node node-0 was never updated by kubelet
I0920 04:27:12.586100  108327 node_lifecycle_controller.go:1012] Condition DiskPressure of node node-0 was never updated by kubelet
I0920 04:27:12.586108  108327 node_lifecycle_controller.go:1012] Condition PIDPressure of node node-0 was never updated by kubelet
I0920 04:27:12.586161  108327 httplog.go:90] GET /api/v1/nodes/node-2?resourceVersion=0: (484.894µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42654]
I0920 04:27:12.588117  108327 httplog.go:90] GET /api/v1/nodes/node-0?resourceVersion=0: (381.011µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:12.590936  108327 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-0", UID:"3d4c3db7-aa91-47f2-9f17-67273a75878c", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeNotReady' Node node-0 status is now: NodeNotReady
I0920 04:27:12.591446  108327 httplog.go:90] GET /api/v1/nodes/node-0?resourceVersion=0: (391.279µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42088]
I0920 04:27:12.592621  108327 httplog.go:90] GET /api/v1/pods?fieldSelector=spec.nodeName%3Dnode-0: (1.881781ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42654]
I0920 04:27:12.592865  108327 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 5.010720217s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:27:12 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:27:12.592900  108327 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 5.010758828s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:27:02 +0000 UTC,LastTransitionTime:2019-09-20 04:27:12 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:27:12.592915  108327 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 5.010773795s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:27:02 +0000 UTC,LastTransitionTime:2019-09-20 04:27:12 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:27:12.592925  108327 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 5.010784705s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:27:02 +0000 UTC,LastTransitionTime:2019-09-20 04:27:12 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:27:12.592958  108327 node_lifecycle_controller.go:796] Node node-1 is unresponsive as of 2019-09-20 04:27:12.592946129 +0000 UTC m=+300.891713689. Adding it to the Taint queue.
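"Adding it to the Taint queue" refers to the controller's rate-limited queue of nodes awaiting NoExecute taints; the rate limit is what spreads taint updates out during a mass failure instead of tainting everything at once. The controller uses its own timed queue, but the standard client-go workqueue illustrates the same rate-limited pattern:

package main

import "k8s.io/client-go/util/workqueue"

func main() {
	// Rate-limited queue: re-queued items back off exponentially, so a
	// flood of unresponsive nodes is processed gradually.
	q := workqueue.NewRateLimitingQueue(workqueue.DefaultControllerRateLimiter())
	q.AddRateLimited("node-1")

	item, _ := q.Get()
	// ... taint the node here; on failure, q.AddRateLimited(item) again.
	q.Forget(item) // success: reset this node's backoff
	q.Done(item)
}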
I0920 04:27:12.592981  108327 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 5.010814326s. Last Ready is: &NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:0001-01-01 00:00:00 +0000 UTC,Reason:,Message:,}
I0920 04:27:12.592995  108327 node_lifecycle_controller.go:1012] Condition MemoryPressure of node node-2 was never updated by kubelet
I0920 04:27:12.593004  108327 node_lifecycle_controller.go:1012] Condition DiskPressure of node node-2 was never updated by kubelet
I0920 04:27:12.593010  108327 node_lifecycle_controller.go:1012] Condition PIDPressure of node node-2 was never updated by kubelet
I0920 04:27:12.593642  108327 httplog.go:90] PATCH /api/v1/nodes/node-0: (2.452784ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42662]
I0920 04:27:12.593710  108327 httplog.go:90] PATCH /api/v1/nodes/node-1: (2.755303ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42656]
I0920 04:27:12.593756  108327 store.go:362] GuaranteedUpdate of /a3a331f8-5bf8-422e-9003-d45c96363f0d/minions/node-2 failed because of a conflict, going to retry
I0920 04:27:12.593964  108327 httplog.go:90] PATCH /api/v1/nodes/node-2: (2.680669ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42682]
I0920 04:27:12.594458  108327 httplog.go:90] PATCH /api/v1/nodes/node-2: (3.012364ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42666]
I0920 04:27:12.594668  108327 controller_utils.go:204] Added [&Taint{Key:node.kubernetes.io/unreachable,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 04:27:12.578344787 +0000 UTC m=+300.877112346,}] Taint to Node node-1
I0920 04:27:12.594695  108327 controller_utils.go:216] Made sure that Node node-1 has no [] Taint
I0920 04:27:12.594747  108327 controller_utils.go:204] Added [&Taint{Key:node.kubernetes.io/unreachable,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 04:27:12.584969179 +0000 UTC m=+300.883736740,}] Taint to Node node-2
I0920 04:27:12.594854  108327 controller_utils.go:204] Added [&Taint{Key:node.kubernetes.io/unreachable,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 04:27:12.585367424 +0000 UTC m=+300.884134984,}] Taint to Node node-0
I0920 04:27:12.594868  108327 controller_utils.go:216] Made sure that Node node-0 has no [] Taint
I0920 04:27:12.595248  108327 httplog.go:90] POST /api/v1/namespaces/default/events: (3.765181ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42664]
I0920 04:27:12.595319  108327 httplog.go:90] PUT /api/v1/nodes/node-2/status: (2.119984ms) 409 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42658]
I0920 04:27:12.595674  108327 httplog.go:90] GET /api/v1/nodes/node-2?resourceVersion=0: (456.96µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42682]
E0920 04:27:12.595731  108327 node_lifecycle_controller.go:1037] Error updating node node-2: Operation cannot be fulfilled on nodes "node-2": the object has been modified; please apply your changes to the latest version and try again
I0920 04:27:12.596005  108327 controller_utils.go:204] Added [&Taint{Key:node.kubernetes.io/unreachable,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 04:27:12.58505781 +0000 UTC m=+300.883825650,}] Taint to Node node-2
I0920 04:27:12.596325  108327 httplog.go:90] PUT /api/v1/nodes/node-0/status: (5.578729ms) 409 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:12.596422  108327 httplog.go:90] PATCH /api/v1/nodes/node-0: (1.900471ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42660]
E0920 04:27:12.596532  108327 node_lifecycle_controller.go:1037] Error updating node node-0: Operation cannot be fulfilled on nodes "node-0": the object has been modified; please apply your changes to the latest version and try again
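The 409 responses above ("the object has been modified") are ordinary optimistic-concurrency conflicts from several writers racing on the same node objects; the controllers simply refetch and retry, as the "GuaranteedUpdate ... going to retry" lines show. A minimal sketch of the standard client-go retry pattern for such writes, assuming a kubernetes.Interface named client:

package sketch

import (
	"context"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/util/retry"
)

// updateNodeWithRetry re-reads the node on every attempt so each write
// goes against the latest resourceVersion, absorbing 409 conflicts.
func updateNodeWithRetry(ctx context.Context, client kubernetes.Interface, name string, mutate func(*v1.Node)) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		node, err := client.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		mutate(node)
		_, err = client.CoreV1().Nodes().Update(ctx, node, metav1.UpdateOptions{})
		return err
	})
}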
I0920 04:27:12.596652  108327 httplog.go:90] GET /api/v1/nodes/node-2?resourceVersion=0: (430.434µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42682]
I0920 04:27:12.596818  108327 httplog.go:90] GET /api/v1/nodes/node-2: (955.219µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42666]
I0920 04:27:12.596891  108327 controller_utils.go:204] Added [&Taint{Key:node.kubernetes.io/unreachable,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 04:27:12.585325625 +0000 UTC m=+300.884093185,}] Taint to Node node-0
I0920 04:27:12.596916  108327 controller_utils.go:216] Made sure that Node node-0 has no [] Taint
I0920 04:27:12.597532  108327 httplog.go:90] GET /api/v1/nodes/node-0: (880.77µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:12.598099  108327 httplog.go:90] PATCH /api/v1/nodes/node-2: (1.817631ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42656]
I0920 04:27:12.598459  108327 controller_utils.go:216] Made sure that Node node-2 has no [&Taint{Key:node.kubernetes.io/not-ready,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 04:27:02 +0000 UTC,}] Taint
I0920 04:27:12.599074  108327 store.go:362] GuaranteedUpdate of /a3a331f8-5bf8-422e-9003-d45c96363f0d/minions/node-2 failed because of a conflict, going to retry
I0920 04:27:12.599885  108327 httplog.go:90] PATCH /api/v1/nodes/node-2: (2.194289ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42682]
I0920 04:27:12.600164  108327 controller_utils.go:216] Made sure that Node node-2 has no [&Taint{Key:node.kubernetes.io/not-ready,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 04:27:02 +0000 UTC,}] Taint
I0920 04:27:12.600646  108327 httplog.go:90] GET /api/v1/nodes/node-1?resourceVersion=0: (308.626µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:12.603419  108327 httplog.go:90] PATCH /api/v1/nodes/node-1: (1.88469ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:12.603681  108327 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-1"}
I0920 04:27:12.603694  108327 controller_utils.go:204] Added [&Taint{Key:node.kubernetes.io/unreachable,Value:,Effect:NoExecute,TimeAdded:2019-09-20 04:27:12.600165192 +0000 UTC m=+300.898932727,}] Taint to Node node-1
I0920 04:27:12.603702  108327 taint_manager.go:438] Updating known taints on node node-1: [{node.kubernetes.io/unreachable  NoExecute 2019-09-20 04:27:12 +0000 UTC}]
I0920 04:27:12.603728  108327 controller_utils.go:216] Made sure that Node node-1 has no [&Taint{Key:node.kubernetes.io/not-ready,Value:,Effect:NoExecute,TimeAdded:<nil>,}] Taint
I0920 04:27:12.603847  108327 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-1"}
I0920 04:27:12.603916  108327 taint_manager.go:438] Updating known taints on node node-1: [{node.kubernetes.io/unreachable  NoExecute 2019-09-20 04:27:12 +0000 UTC}]
I0920 04:27:12.608036  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.191189ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:12.617635  108327 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 5.035457565s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:27:12 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:27:12.617694  108327 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 5.035526302s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:27:02 +0000 UTC,LastTransitionTime:2019-09-20 04:27:12 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:27:12.617708  108327 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 5.03554116s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:27:02 +0000 UTC,LastTransitionTime:2019-09-20 04:27:12 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:27:12.617724  108327 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 5.035554792s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:27:02 +0000 UTC,LastTransitionTime:2019-09-20 04:27:12 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:27:12.617957  108327 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 5.043621147s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:27:12 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:27:12.618010  108327 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 5.043683335s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:27:02 +0000 UTC,LastTransitionTime:2019-09-20 04:27:12 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:27:12.618031  108327 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 5.043705142s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:27:02 +0000 UTC,LastTransitionTime:2019-09-20 04:27:12 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:27:12.618046  108327 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 5.04372003s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:27:02 +0000 UTC,LastTransitionTime:2019-09-20 04:27:12 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:27:12.618118  108327 node_lifecycle_controller.go:796] Node node-0 is unresponsive as of 2019-09-20 04:27:12.618095428 +0000 UTC m=+300.916862989. Adding it to the Taint queue.
I0920 04:27:12.618155  108327 node_lifecycle_controller.go:1094] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
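"Entering master disruption mode" fires when the controller observes every node NotReady at once, at which point it backs off evictions rather than draining the whole cluster over what is likely a control-plane or network problem. A minimal sketch of the readiness test that decision rests on (simplified; the real controller tracks health per zone):

package sketch

import v1 "k8s.io/api/core/v1"

// allNodesNotReady reports whether no node has Ready=True — the
// condition that triggers "master disruption mode" in the log above.
func allNodesNotReady(nodes []v1.Node) bool {
	for _, n := range nodes {
		for _, c := range n.Status.Conditions {
			if c.Type == v1.NodeReady && c.Status == v1.ConditionTrue {
				return false
			}
		}
	}
	return true
}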
I0920 04:27:12.618907  108327 httplog.go:90] GET /api/v1/nodes/node-2?resourceVersion=0: (527.486µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42660]
I0920 04:27:12.618907  108327 httplog.go:90] GET /api/v1/nodes/node-2?resourceVersion=0: (528.652µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:12.622187  108327 store.go:362] GuaranteedUpdate of /a3a331f8-5bf8-422e-9003-d45c96363f0d/minions/node-2 failed because of a conflict, going to retry
I0920 04:27:12.622954  108327 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-2"}
I0920 04:27:12.623055  108327 taint_manager.go:438] Updating known taints on node node-2: []
I0920 04:27:12.623156  108327 taint_manager.go:459] All taints were removed from the Node node-2. Cancelling all evictions...
I0920 04:27:12.623201  108327 timed_workers.go:129] Cancelling TimedWorkerQueue item taint-based-evictions8302cfd2-aef7-421c-9bbc-d15430c938a2/testpod-0 at 2019-09-20 04:27:12.623197153 +0000 UTC m=+300.921964707
I0920 04:27:12.622956  108327 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-2"}
I0920 04:27:12.623256  108327 taint_manager.go:438] Updating known taints on node node-2: []
I0920 04:27:12.623276  108327 taint_manager.go:459] All taints were removed from the Node node-2. Cancelling all evictions...
I0920 04:27:12.623294  108327 timed_workers.go:129] Cancelling TimedWorkerQueue item taint-based-evictions8302cfd2-aef7-421c-9bbc-d15430c938a2/testpod-0 at 2019-09-20 04:27:12.623291133 +0000 UTC m=+300.922058684
I0920 04:27:12.623355  108327 event.go:255] Event(v1.ObjectReference{Kind:"Pod", Namespace:"taint-based-evictions8302cfd2-aef7-421c-9bbc-d15430c938a2", Name:"testpod-0", UID:"", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TaintManagerEviction' Cancelling deletion of Pod taint-based-evictions8302cfd2-aef7-421c-9bbc-d15430c938a2/testpod-0
I0920 04:27:12.623487  108327 event.go:255] Event(v1.ObjectReference{Kind:"Pod", Namespace:"taint-based-evictions8302cfd2-aef7-421c-9bbc-d15430c938a2", Name:"testpod-0", UID:"", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TaintManagerEviction' Cancelling deletion of Pod taint-based-evictions8302cfd2-aef7-421c-9bbc-d15430c938a2/testpod-0
I0920 04:27:12.623892  108327 httplog.go:90] PATCH /api/v1/nodes/node-2: (3.309019ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42660]
I0920 04:27:12.624504  108327 httplog.go:90] PATCH /api/v1/nodes/node-2: (4.729681ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:12.624785  108327 controller_utils.go:204] Added [&Taint{Key:node.kubernetes.io/unreachable,Value:,Effect:NoExecute,TimeAdded:2019-09-20 04:27:12.617779583 +0000 UTC m=+300.916547175,}] Taint to Node node-2
I0920 04:27:12.624955  108327 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-2"}
I0920 04:27:12.624982  108327 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-2"}
I0920 04:27:12.625019  108327 taint_manager.go:438] Updating known taints on node node-2: [{node.kubernetes.io/not-ready  NoExecute 2019-09-20 04:27:07 +0000 UTC} {node.kubernetes.io/unreachable  NoExecute 2019-09-20 04:27:12 +0000 UTC}]
I0920 04:27:12.625057  108327 timed_workers.go:110] Adding TimedWorkerQueue item taint-based-evictions8302cfd2-aef7-421c-9bbc-d15430c938a2/testpod-0 at 2019-09-20 04:27:12.625045307 +0000 UTC m=+300.923812861 to be fired at 2019-09-20 04:30:32.625045307 +0000 UTC m=+500.923812861
I0920 04:27:12.624982  108327 taint_manager.go:438] Updating known taints on node node-2: [{node.kubernetes.io/not-ready  NoExecute 2019-09-20 04:27:07 +0000 UTC} {node.kubernetes.io/unreachable  NoExecute 2019-09-20 04:27:12 +0000 UTC}]
I0920 04:27:12.625117  108327 timed_workers.go:110] Adding TimedWorkerQueue item taint-based-evictions8302cfd2-aef7-421c-9bbc-d15430c938a2/testpod-0 at 2019-09-20 04:27:12.625107719 +0000 UTC m=+300.923875278 to be fired at 2019-09-20 04:30:32.625107719 +0000 UTC m=+500.923875278
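The eviction timer is queued to fire at 04:30:32, exactly 200 seconds after the NoExecute taint landed at 04:27:12, because the test pod tolerates node.kubernetes.io/unreachable:NoExecute for a bounded window. A minimal sketch of such a toleration on a pod spec, with the 200s value inferred from the timestamps above:

package sketch

import v1 "k8s.io/api/core/v1"

// unreachableToleration matches the 200s window in the log: taint added
// at 04:27:12, TimedWorkerQueue item set to fire at 04:30:32.
func unreachableToleration() v1.Toleration {
	seconds := int64(200)
	return v1.Toleration{
		Key:               "node.kubernetes.io/unreachable",
		Operator:          v1.TolerationOpExists,
		Effect:            v1.TaintEffectNoExecute,
		TolerationSeconds: &seconds,
	}
}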
I0920 04:27:12.625510  108327 httplog.go:90] GET /api/v1/nodes/node-2?resourceVersion=0: (518.327µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:12.625864  108327 httplog.go:90] POST /api/v1/namespaces/taint-based-evictions8302cfd2-aef7-421c-9bbc-d15430c938a2/events: (1.815748ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42692]
I0920 04:27:12.625869  108327 httplog.go:90] POST /api/v1/namespaces/taint-based-evictions8302cfd2-aef7-421c-9bbc-d15430c938a2/events: (1.824182ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42690]
I0920 04:27:12.628264  108327 httplog.go:90] PATCH /api/v1/nodes/node-2: (1.910054ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:12.628608  108327 controller_utils.go:216] Made sure that Node node-2 has no [&Taint{Key:node.kubernetes.io/not-ready,Value:,Effect:NoExecute,TimeAdded:<nil>,}] Taint
I0920 04:27:12.628707  108327 node_lifecycle_controller.go:1094] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
I0920 04:27:12.628743  108327 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-2"}
I0920 04:27:12.628752  108327 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-2"}
I0920 04:27:12.628766  108327 taint_manager.go:438] Updating known taints on node node-2: [{node.kubernetes.io/unreachable  NoExecute 2019-09-20 04:27:12 +0000 UTC}]
I0920 04:27:12.628774  108327 taint_manager.go:438] Updating known taints on node node-2: [{node.kubernetes.io/unreachable  NoExecute 2019-09-20 04:27:12 +0000 UTC}]
I0920 04:27:12.629413  108327 httplog.go:90] GET /api/v1/nodes/node-2?resourceVersion=0: (399.469µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:12.708981  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.97513ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:12.808996  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.940105ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:12.851974  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:12.851976  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:12.852344  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:12.854861  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:12.854989  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:12.855778  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:12.856056  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:12.908982  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.959216ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:13.008988  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.004732ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:13.108852  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.889164ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:13.208902  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.852287ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:13.277200  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:13.277709  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:13.277891  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:13.277897  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:13.278999  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:13.280087  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:13.308633  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.676554ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:13.409117  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.027638ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:13.486635  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:13.509100  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.092418ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:13.609280  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.193939ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:13.708695  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.728283ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:13.808741  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.732623ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:13.852123  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:13.852125  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:13.852727  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:13.855186  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:13.855192  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:13.855957  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:13.856215  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:13.909175  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.207772ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:14.008675  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.698289ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:14.109110  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.056046ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:14.208986  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.915285ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:14.277425  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:14.277931  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:14.278040  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:14.278236  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:14.279184  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:14.280250  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:14.308717  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.774752ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:14.408860  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.811772ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:14.486901  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:14.508780  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.779166ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:14.609107  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.072664ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:14.708823  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.838162ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:14.808955  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.931233ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:14.852343  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:14.852344  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:14.852895  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:14.855379  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:14.855450  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:14.856119  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:14.856372  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:14.908857  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.810588ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:15.008911  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.873321ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:15.108534  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.60276ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:15.209038  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.006317ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:15.277631  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:15.278137  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:15.278264  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:15.278376  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:15.279324  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:15.280444  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:15.308937  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.942368ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:15.408788  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.722528ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:15.487162  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:15.509001  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.910251ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:15.608915  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.77716ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:15.708913  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.883933ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:15.749922  108327 httplog.go:90] GET /api/v1/namespaces/default: (1.768317ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52112]
I0920 04:27:15.752061  108327 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.473433ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52112]
I0920 04:27:15.753642  108327 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.126885ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52112]
I0920 04:27:15.809147  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.056786ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:15.852558  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:15.852558  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:15.853053  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:15.855597  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:15.855675  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:15.856292  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:15.856562  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:15.908769  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.7539ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:16.008822  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.877609ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:16.109172  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.037278ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:16.209156  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.920978ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:16.277836  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:16.278349  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:16.278421  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:16.278628  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:16.279506  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:16.280652  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:16.309231  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.117664ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:16.408927  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.857468ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:16.487378  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:16.509008  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.939755ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:16.609036  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.022382ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:16.708940  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.9282ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:16.808905  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.881399ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:16.852774  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:16.852774  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:16.853192  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:16.855768  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:16.855894  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:16.856450  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:16.856778  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:16.908846  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.867487ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:17.008833  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.83506ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:17.109186  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.122551ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:17.208993  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.005351ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:17.278007  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:17.278587  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:17.278607  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:17.278840  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:17.279643  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:17.280831  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:17.308721  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.773715ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:17.409211  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.145233ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:17.487611  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:17.508832  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.769355ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:17.608846  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.82927ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:17.624569  108327 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 10.050229043s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:27:12 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:27:17.624633  108327 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 10.05030694s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:27:02 +0000 UTC,LastTransitionTime:2019-09-20 04:27:12 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:27:17.624647  108327 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 10.05032179s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:27:02 +0000 UTC,LastTransitionTime:2019-09-20 04:27:12 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:27:17.624663  108327 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 10.050337453s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:27:02 +0000 UTC,LastTransitionTime:2019-09-20 04:27:12 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:27:17.624722  108327 node_lifecycle_controller.go:796] Node node-0 is unresponsive as of 2019-09-20 04:27:17.624704103 +0000 UTC m=+305.923471657. Adding it to the Taint queue.
I0920 04:27:17.624749  108327 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 10.050386303s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:27:12 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:27:17.624762  108327 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 10.050399971s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:27:02 +0000 UTC,LastTransitionTime:2019-09-20 04:27:12 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:27:17.624816  108327 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 10.050452012s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:27:02 +0000 UTC,LastTransitionTime:2019-09-20 04:27:12 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:27:17.624836  108327 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 10.050471855s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:27:02 +0000 UTC,LastTransitionTime:2019-09-20 04:27:12 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:27:17.624884  108327 node_lifecycle_controller.go:796] Node node-1 is unresponsive as of 2019-09-20 04:27:17.624857604 +0000 UTC m=+305.923625163. Adding it to the Taint queue.
I0920 04:27:17.624910  108327 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 10.050666043s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:27:12 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:27:17.624925  108327 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 10.050680594s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:27:02 +0000 UTC,LastTransitionTime:2019-09-20 04:27:12 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:27:17.624936  108327 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 10.050691862s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:27:02 +0000 UTC,LastTransitionTime:2019-09-20 04:27:12 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:27:17.624946  108327 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 10.050702085s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:27:02 +0000 UTC,LastTransitionTime:2019-09-20 04:27:12 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:27:17.624966  108327 node_lifecycle_controller.go:796] Node node-2 is unresponsive as of 2019-09-20 04:27:17.624959275 +0000 UTC m=+305.923726833. Adding it to the Taint queue.
I0920 04:27:17.630010  108327 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 10.04785696s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:27:12 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:27:17.630064  108327 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 10.047922559s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:27:02 +0000 UTC,LastTransitionTime:2019-09-20 04:27:12 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:27:17.630079  108327 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 10.047937714s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:27:02 +0000 UTC,LastTransitionTime:2019-09-20 04:27:12 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:27:17.630089  108327 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 10.047948743s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:27:02 +0000 UTC,LastTransitionTime:2019-09-20 04:27:12 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:27:17.630139  108327 node_lifecycle_controller.go:796] Node node-1 is unresponsive as of 2019-09-20 04:27:17.630123594 +0000 UTC m=+305.928891149. Adding it to the Taint queue.
I0920 04:27:17.630160  108327 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 10.047993595s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:27:12 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:27:17.630171  108327 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 10.048004995s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:27:02 +0000 UTC,LastTransitionTime:2019-09-20 04:27:12 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:27:17.630192  108327 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 10.048026086s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:27:02 +0000 UTC,LastTransitionTime:2019-09-20 04:27:12 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:27:17.630203  108327 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 10.048036432s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:27:02 +0000 UTC,LastTransitionTime:2019-09-20 04:27:12 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:27:17.630227  108327 node_lifecycle_controller.go:796] Node node-2 is unresponsive as of 2019-09-20 04:27:17.630217431 +0000 UTC m=+305.928984991. Adding it to the Taint queue.
I0920 04:27:17.630260  108327 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 10.048196131s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:27:12 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:27:17.630280  108327 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 10.048215098s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:27:02 +0000 UTC,LastTransitionTime:2019-09-20 04:27:12 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:27:17.630290  108327 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 10.048226982s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:27:02 +0000 UTC,LastTransitionTime:2019-09-20 04:27:12 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:27:17.630300  108327 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 10.048236599s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:27:02 +0000 UTC,LastTransitionTime:2019-09-20 04:27:12 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:27:17.630321  108327 node_lifecycle_controller.go:796] Node node-0 is unresponsive as of 2019-09-20 04:27:17.630313551 +0000 UTC m=+305.929081109. Adding it to the Taint queue.
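The controller lines above show the health sweep behind taint-based evictions: for each node, the time since the last observed status update is compared against the node monitor grace period, and once a node is deemed unresponsive it is added to the taint queue so the NoExecute taint can be applied. Below is a minimal Go sketch of that staleness check, not the controller's actual code; the `gracePeriod` value (the `--node-monitor-grace-period` default) and the constructed node in `main` are assumptions for illustration.

```go
package main

import (
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// gracePeriod stands in for --node-monitor-grace-period; its kube-controller-manager
// default is 40s, though the test run above evidently uses a much shorter window.
const gracePeriod = 40 * time.Second

// nodeUnresponsive reports whether the node's Ready condition heartbeat is older
// than the grace period, mirroring the "hasn't been updated" lines in the log.
func nodeUnresponsive(node *v1.Node, now time.Time) bool {
	for _, cond := range node.Status.Conditions {
		if cond.Type != v1.NodeReady {
			continue
		}
		stale := now.Sub(cond.LastHeartbeatTime.Time)
		fmt.Printf("node %s hasn't been updated for %s\n", node.Name, stale)
		return stale > gracePeriod
	}
	return true // no Ready condition recorded at all: treat as unresponsive
}

func main() {
	node := &v1.Node{}
	node.Name = "node-2"
	node.Status.Conditions = []v1.NodeCondition{{
		Type:              v1.NodeReady,
		Status:            v1.ConditionUnknown,
		LastHeartbeatTime: metav1.NewTime(time.Now().Add(-50 * time.Second)),
	}}
	if nodeUnresponsive(node, time.Now()) {
		fmt.Println("node-2 is unresponsive; adding it to the taint queue")
	}
}
```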
I0920 04:27:17.709079  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.128273ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:17.808755  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.794104ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:17.853131  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:17.853145  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:17.853426  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:17.855994  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:17.856140  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:17.856619  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:17.856932  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
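The recurring bursts of `forcing resync` come from shared informers built with a non-zero resync period: on every period the reflector re-delivers its cached objects to the registered handlers, and reflector.go logs each pass. A minimal sketch of an informer factory that would produce this pattern; the 1s period, the kubeconfig loading, and the handler body are assumptions for the example, and the no-return `AddEventHandler` signature matches client-go of this vintage.

```go
package main

import (
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Every resync period the reflector replays the cached objects to the
	// handlers below, which is what reflector.go logs as "forcing resync".
	factory := informers.NewSharedInformerFactory(client, 1*time.Second)
	factory.Core().V1().Nodes().Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
		UpdateFunc: func(oldObj, newObj interface{}) {
			node := newObj.(*v1.Node)
			fmt.Println("resynced/updated node:", node.Name)
		},
	})

	stopCh := make(chan struct{})
	factory.Start(stopCh)
	cache.WaitForCacheSync(stopCh, factory.Core().V1().Nodes().Informer().HasSynced)
	<-stopCh // block forever; a real program would wire this to signal handling
}
```

Several factories are running here, which is why each period produces a batch of near-simultaneous resync lines rather than a single one.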
I0920 04:27:17.908939  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.916889ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:18.008941  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.904942ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:18.108883  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.8777ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:18.209147  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.131906ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:18.278353  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:18.278755  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:18.278773  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:18.278985  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:18.279842  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:18.281021  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:18.308896  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.83155ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:18.408802  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.778626ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:18.487859  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:18.508959  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.852945ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:18.608998  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.966386ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:18.708857  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.840234ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:18.808885  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.891249ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:18.853325  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:18.853367  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:18.853621  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:18.856180  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:18.856330  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:18.856737  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:18.857085  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:18.908717  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.770564ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:19.008798  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.807145ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:19.108931  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.889001ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:19.208898  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.797414ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:19.278511  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:19.278942  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:19.278978  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:19.279127  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:19.280035  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:19.281196  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:19.308814  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.821126ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:19.408953  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.884552ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:19.488125  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:19.509047  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.020065ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:19.608873  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.888349ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:19.708932  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.924996ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:19.808899  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.924476ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:19.853547  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:19.853612  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:19.853909  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:19.856379  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:19.856504  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:19.856930  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:19.857277  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:19.908992  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.995076ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:20.008779  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.740137ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:20.109017  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.027027ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:20.208844  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.847226ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
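The steady 100ms stream of `GET /api/v1/nodes/node-2` requests is the test polling the node until the expected NoExecute taint appears. A sketch of such a poll under client-go signatures of this vintage (context-free `Get`); the helper name, interval, timeout, and the specific taint key are assumptions, not the test's actual code.

```go
package taintwait

import (
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForNoExecuteTaint polls the node every 100ms (the cadence visible in the
// log above) until a not-ready NoExecute taint appears or the timeout expires.
func waitForNoExecuteTaint(client kubernetes.Interface, nodeName string) error {
	return wait.PollImmediate(100*time.Millisecond, 30*time.Second, func() (bool, error) {
		node, err := client.CoreV1().Nodes().Get(nodeName, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, taint := range node.Spec.Taints {
			if taint.Key == "node.kubernetes.io/not-ready" && taint.Effect == v1.TaintEffectNoExecute {
				fmt.Printf("node %s carries %s:%s\n", nodeName, taint.Key, taint.Effect)
				return true, nil
			}
		}
		return false, nil
	})
}
```

A test would call `waitForNoExecuteTaint` right after cutting off the node's heartbeats, which is presumably what drives the request stream above.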
I0920 04:27:20.278967  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:20.279198  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:20.279330  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:20.279465  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:20.280190  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:20.281332  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:20.308773  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.818427ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:20.408890  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.897307ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:20.488341  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:20.508800  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.764496ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:20.608908  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.873782ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:20.708825  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.822135ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:20.808640  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.633153ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:20.853774  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:20.853778  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:20.854111  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:20.856578  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:20.856665  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:20.857128  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:20.857463  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:20.908940  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.906286ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:21.008869  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.937222ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:21.108932  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.921101ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:21.208822  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.803516ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:21.279201  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:21.279373  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:21.279529  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:21.279858  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:21.280354  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:21.281499  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:21.308744  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.774667ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:21.408983  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.94683ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:21.488601  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:21.509116  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.038539ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:21.608680  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.714969ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:21.708769  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.811088ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:21.808932  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.892057ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:21.854011  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:21.854044  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:21.854281  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:21.856765  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:21.856802  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:21.857308  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:21.857638  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:21.908972  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.914406ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:22.008823  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.845545ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:22.082376  108327 httplog.go:90] GET /api/v1/namespaces/default: (1.784538ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:22.084361  108327 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.452041ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:22.086238  108327 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.223579ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:22.109165  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.191578ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:22.208875  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.852187ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:22.279380  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:22.279514  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:22.279748  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:22.280133  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:22.280542  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:22.281691  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:22.308970  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.97063ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:22.409009  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.993125ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:22.488835  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:22.509489  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.387337ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:22.608811  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.787967ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:22.625227  108327 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 15.050887761s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:27:12 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:27:22.625440  108327 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 15.051110493s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:27:02 +0000 UTC,LastTransitionTime:2019-09-20 04:27:12 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:27:22.625510  108327 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 15.05118314s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:27:02 +0000 UTC,LastTransitionTime:2019-09-20 04:27:12 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:27:22.625585  108327 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 15.051258088s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:27:02 +0000 UTC,LastTransitionTime:2019-09-20 04:27:12 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:27:22.625704  108327 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 15.051340245s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:27:12 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:27:22.625752  108327 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 15.051388056s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:27:02 +0000 UTC,LastTransitionTime:2019-09-20 04:27:12 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:27:22.625784  108327 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 15.051421108s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:27:02 +0000 UTC,LastTransitionTime:2019-09-20 04:27:12 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:27:22.625820  108327 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 15.051456129s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:27:02 +0000 UTC,LastTransitionTime:2019-09-20 04:27:12 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:27:22.625896  108327 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 15.051651034s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:27:12 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:27:22.625939  108327 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 15.051693763s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:27:02 +0000 UTC,LastTransitionTime:2019-09-20 04:27:12 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:27:22.625980  108327 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 15.051734369s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:27:02 +0000 UTC,LastTransitionTime:2019-09-20 04:27:12 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:27:22.626032  108327 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 15.051786117s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:27:02 +0000 UTC,LastTransitionTime:2019-09-20 04:27:12 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:27:22.630559  108327 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 15.04848271s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:27:12 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:27:22.630732  108327 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 15.048665192s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:27:02 +0000 UTC,LastTransitionTime:2019-09-20 04:27:12 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:27:22.630771  108327 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 15.048706774s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:27:02 +0000 UTC,LastTransitionTime:2019-09-20 04:27:12 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:27:22.630803  108327 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 15.04873933s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:27:02 +0000 UTC,LastTransitionTime:2019-09-20 04:27:12 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:27:22.630922  108327 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 15.048780014s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:27:12 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:27:22.630977  108327 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 15.048834645s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:27:02 +0000 UTC,LastTransitionTime:2019-09-20 04:27:12 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:27:22.631017  108327 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 15.048874453s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:27:02 +0000 UTC,LastTransitionTime:2019-09-20 04:27:12 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:27:22.631057  108327 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 15.04891508s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:27:02 +0000 UTC,LastTransitionTime:2019-09-20 04:27:12 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:27:22.631152  108327 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 15.048984941s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:27:12 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:27:22.631201  108327 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 15.049032618s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:27:02 +0000 UTC,LastTransitionTime:2019-09-20 04:27:12 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:27:22.631237  108327 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 15.049068825s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:27:02 +0000 UTC,LastTransitionTime:2019-09-20 04:27:12 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:27:22.631280  108327 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 15.049112245s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:27:02 +0000 UTC,LastTransitionTime:2019-09-20 04:27:12 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
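Once the grace period lapses, the controller rewrites the stale conditions into exactly the shape quoted above: Status `Unknown`, Reason `NodeStatusUnknown` (for Ready) or `NodeStatusNeverUpdated` (for the pressure conditions), with the kubelet messages from the log. A sketch that constructs such a condition for illustration; it is not the controller's actual update path.

```go
package main

import (
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// The condition shape the controller records once a node stops
	// reporting, matching the NodeCondition values printed in the log.
	cond := v1.NodeCondition{
		Type:               v1.NodeReady,
		Status:             v1.ConditionUnknown,
		Reason:             "NodeStatusUnknown",
		Message:            "Kubelet stopped posting node status.",
		LastTransitionTime: metav1.NewTime(time.Now()),
	}
	fmt.Printf("%+v\n", cond)
}
```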
I0920 04:27:22.708909  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.867548ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:22.808761  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.768527ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:22.854235  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:22.854238  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:22.854487  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:22.856983  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:22.856976  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:22.857482  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:22.857799  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:22.908864  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.815705ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:23.009186  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.139906ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:23.109135  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.107369ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:23.209144  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.990454ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:23.279599  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:23.279618  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:23.280128  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:23.280274  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:23.280728  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:23.281841  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:23.308710  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.736219ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:23.408988  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.915223ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:23.489058  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:23.508839  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.862768ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:23.608570  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.643221ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:23.708947  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.991624ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:23.808782  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.773363ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:23.854453  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:23.854453  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:23.854653  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:23.857203  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:23.857208  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:23.857675  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:23.857983  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:23.908951  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.921495ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:24.008910  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.952953ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:24.109070  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.971145ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:24.208716  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.678807ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:24.279823  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:24.279823  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:24.280312  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:24.280444  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:24.280872  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:24.282000  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:24.308814  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.829529ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:24.408907  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.857196ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:24.489383  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:24.508788  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.78592ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:24.608857  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.749594ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:24.709073  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.889923ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:24.808908  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.928931ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:24.854707  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:24.854707  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:24.854860  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:24.857427  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:24.857432  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:24.857825  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:24.858154  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:24.909155  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.078274ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:25.008992  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.981739ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:25.109073  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.124037ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:25.209226  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.159353ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:25.280148  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:25.280231  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:25.280523  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:25.280659  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:25.281037  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:25.282184  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:25.310424  108327 httplog.go:90] GET /api/v1/nodes/node-2: (3.412936ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:25.408958  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.901693ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:25.489673  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:25.508943  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.881909ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:25.609009  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.985773ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:25.708950  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.883799ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:25.750418  108327 httplog.go:90] GET /api/v1/namespaces/default: (2.088756ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52112]
I0920 04:27:25.752265  108327 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.312088ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52112]
I0920 04:27:25.754073  108327 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.287404ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52112]
I0920 04:27:25.808816  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.791789ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:25.854933  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:25.854981  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:25.855177  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:25.857672  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:25.857675  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:25.858025  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:25.858308  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:25.908813  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.813676ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:26.009109  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.067034ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:26.108614  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.598288ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:26.208649  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.676428ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:26.280619  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:26.280714  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:26.280789  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:26.280983  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:26.281191  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:26.282381  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:26.309012  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.966227ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:26.408928  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.926701ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:26.489905  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:26.508900  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.833028ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:26.608813  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.854006ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:26.708844  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.82342ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:26.808972  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.831494ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:26.855148  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:26.855192  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:26.855366  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:26.857905  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:26.857905  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:26.858283  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:26.858514  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:26.908906  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.83623ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:27.009081  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.06735ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:27.109116  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.983176ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:27.208921  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.859425ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:27.282320  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:27.282333  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:27.283365  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:27.283502  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:27.283577  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:27.283607  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:27.309190  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.184234ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:27.409006  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.980562ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:27.490105  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:27.508765  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.798809ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:27.608844  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.835509ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:27.626434  108327 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 20.052057334s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:27:12 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:27:27.626541  108327 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 20.052214348s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:27:02 +0000 UTC,LastTransitionTime:2019-09-20 04:27:12 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:27:27.626560  108327 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 20.052234175s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:27:02 +0000 UTC,LastTransitionTime:2019-09-20 04:27:12 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:27:27.626573  108327 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 20.052248229s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:27:02 +0000 UTC,LastTransitionTime:2019-09-20 04:27:12 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:27:27.626706  108327 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 20.052341427s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:27:12 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:27:27.626725  108327 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 20.052362228s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:27:02 +0000 UTC,LastTransitionTime:2019-09-20 04:27:12 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:27:27.626735  108327 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 20.052372725s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:27:02 +0000 UTC,LastTransitionTime:2019-09-20 04:27:12 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:27:27.626808  108327 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 20.052444524s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:27:02 +0000 UTC,LastTransitionTime:2019-09-20 04:27:12 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:27:27.626863  108327 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 20.052618717s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:27:12 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:27:27.626881  108327 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 20.052637071s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:27:02 +0000 UTC,LastTransitionTime:2019-09-20 04:27:12 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:27:27.626892  108327 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 20.052647583s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:27:02 +0000 UTC,LastTransitionTime:2019-09-20 04:27:12 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:27:27.626924  108327 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 20.052679379s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:27:02 +0000 UTC,LastTransitionTime:2019-09-20 04:27:12 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:27:27.631591  108327 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 20.049514475s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:27:12 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:27:27.631656  108327 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 20.049591993s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:27:02 +0000 UTC,LastTransitionTime:2019-09-20 04:27:12 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:27:27.631699  108327 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 20.049635407s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:27:02 +0000 UTC,LastTransitionTime:2019-09-20 04:27:12 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:27:27.631712  108327 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 20.049648897s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:27:02 +0000 UTC,LastTransitionTime:2019-09-20 04:27:12 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:27:27.631928  108327 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 20.049648221s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:27:12 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:27:27.631994  108327 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 20.049850104s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:27:02 +0000 UTC,LastTransitionTime:2019-09-20 04:27:12 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:27:27.632012  108327 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 20.049870695s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:27:02 +0000 UTC,LastTransitionTime:2019-09-20 04:27:12 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:27:27.632023  108327 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 20.049882058s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:27:02 +0000 UTC,LastTransitionTime:2019-09-20 04:27:12 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:27:27.632112  108327 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 20.049945798s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:27:12 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:27:27.632129  108327 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 20.049962673s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:27:02 +0000 UTC,LastTransitionTime:2019-09-20 04:27:12 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:27:27.632142  108327 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 20.049975421s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:27:02 +0000 UTC,LastTransitionTime:2019-09-20 04:27:12 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:27:27.632156  108327 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 20.049989676s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:27:02 +0000 UTC,LastTransitionTime:2019-09-20 04:27:12 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
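Across the three sweeps the staleness readings grow from roughly 10s to 15s to 20s while every condition stays Unknown; when the grace period is exhausted the controller applies a NoExecute taint, and a pod that tolerates the taint for zero seconds is evicted the moment it lands, which is the scenario this subtest drives at. A sketch of such a zero-second toleration; the surrounding `main` is scaffolding and the not-ready taint key is an assumption about which taint the test targets.

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func main() {
	// tolerationSeconds: 0 means the pod is evicted as soon as the
	// matching NoExecute taint is applied to its node.
	zero := int64(0)
	tol := v1.Toleration{
		Key:               "node.kubernetes.io/not-ready",
		Operator:          v1.TolerationOpExists,
		Effect:            v1.TaintEffectNoExecute,
		TolerationSeconds: &zero,
	}
	fmt.Printf("%+v\n", tol)
}
```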
I0920 04:27:27.708641  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.672471ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:27.808963  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.938641ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:27.855529  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:27.855529  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:27.855766  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:27.858172  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:27.858183  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:27.858548  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:27.858673  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:27.909024  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.942572ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:28.009017  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.973299ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:28.109081  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.027771ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:28.209071  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.050131ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:28.282524  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:28.282524  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:28.283587  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:28.283683  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:28.283767  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:28.284193  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:28.308983  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.82351ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:28.408798  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.803104ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:28.490323  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:28.509256  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.210782ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:28.608717  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.736202ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:28.708920  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.898642ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:28.809007  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.021426ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:28.855728  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:28.855725  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:28.855950  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:28.858352  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:28.858352  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:28.858744  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:28.858815  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:28.908965  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.936439ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:29.008708  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.772984ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:29.108903  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.933019ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:29.208740  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.807814ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:29.282940  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:29.282949  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:29.283753  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:29.283805  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:29.283980  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:29.284381  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:29.308678  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.636124ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:29.408897  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.842513ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:29.490582  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:29.509066  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.029524ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:29.609059  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.090254ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:29.710232  108327 httplog.go:90] GET /api/v1/nodes/node-2: (3.112704ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:29.809238  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.21714ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:29.855976  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:29.855976  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:29.856058  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:29.858483  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:29.858611  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:29.858995  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:29.859014  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:29.908799  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.762972ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:30.008925  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.93456ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:30.109204  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.900351ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:30.208965  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.854837ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:30.283172  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:30.283173  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:30.283964  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:30.283990  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:30.284165  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:30.284578  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:30.309050  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.038001ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:30.408887  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.821542ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:30.490818  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:30.509036  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.005005ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:30.608854  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.814331ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:30.708727  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.772631ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:30.808961  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.952515ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:30.856176  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:30.856196  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:30.856176  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:30.858630  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:30.858843  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:30.859280  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:30.859284  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:30.908815  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.762866ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:31.008950  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.977079ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:31.109022  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.019375ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:31.208859  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.888066ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:31.283417  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:31.283438  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:31.284146  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:31.284186  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:31.284361  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:31.284887  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:31.309033  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.994128ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:31.408865  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.868691ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:31.491084  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:31.508951  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.941972ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:31.608861  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.869502ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:31.708778  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.82688ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:31.808958  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.949821ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:31.856382  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:31.856420  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:31.856385  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:31.858840  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:31.859027  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:31.859541  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:31.859546  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:31.908963  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.94335ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:32.009003  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.91836ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:32.082375  108327 httplog.go:90] GET /api/v1/namespaces/default: (1.604856ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:32.084423  108327 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.55147ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:32.086346  108327 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.406345ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:32.108759  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.756658ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:32.209199  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.173244ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:32.283641  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:32.283641  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:32.284265  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:32.284340  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:32.284574  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:32.285048  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:32.308923  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.875793ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:32.408775  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.799186ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:32.491329  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:32.509110  108327 httplog.go:90] GET /api/v1/nodes/node-2: (2.060865ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:32.608654  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.708184ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:32.627218  108327 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 25.052878178s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:27:12 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:27:32.627296  108327 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 25.052969461s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:27:02 +0000 UTC,LastTransitionTime:2019-09-20 04:27:12 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:27:32.627312  108327 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 25.052986801s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:27:02 +0000 UTC,LastTransitionTime:2019-09-20 04:27:12 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:27:32.627327  108327 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 25.052998959s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:27:02 +0000 UTC,LastTransitionTime:2019-09-20 04:27:12 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:27:32.627422  108327 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 25.053056366s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:27:12 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:27:32.627448  108327 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 25.053085838s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:27:02 +0000 UTC,LastTransitionTime:2019-09-20 04:27:12 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:27:32.627458  108327 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 25.053095889s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:27:02 +0000 UTC,LastTransitionTime:2019-09-20 04:27:12 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:27:32.627468  108327 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 25.053105203s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:27:02 +0000 UTC,LastTransitionTime:2019-09-20 04:27:12 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:27:32.627521  108327 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 25.053277548s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:27:12 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:27:32.627533  108327 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 25.053288783s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:27:02 +0000 UTC,LastTransitionTime:2019-09-20 04:27:12 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:27:32.627544  108327 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 25.053300606s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:27:02 +0000 UTC,LastTransitionTime:2019-09-20 04:27:12 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:27:32.627554  108327 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 25.053309775s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:27:02 +0000 UTC,LastTransitionTime:2019-09-20 04:27:12 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:27:32.632454  108327 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 25.050377894s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:27:12 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:27:32.632517  108327 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 25.050452513s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:27:02 +0000 UTC,LastTransitionTime:2019-09-20 04:27:12 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:27:32.632532  108327 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 25.050466703s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:27:02 +0000 UTC,LastTransitionTime:2019-09-20 04:27:12 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:27:32.632543  108327 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 25.050480129s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:27:02 +0000 UTC,LastTransitionTime:2019-09-20 04:27:12 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:27:32.632614  108327 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 25.050472958s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:27:12 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:27:32.632635  108327 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 25.050494232s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:27:02 +0000 UTC,LastTransitionTime:2019-09-20 04:27:12 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:27:32.632645  108327 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 25.050504417s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:27:02 +0000 UTC,LastTransitionTime:2019-09-20 04:27:12 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:27:32.632658  108327 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 25.050517517s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:27:02 +0000 UTC,LastTransitionTime:2019-09-20 04:27:12 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:27:32.632694  108327 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 25.050528111s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:27:12 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:27:32.632705  108327 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 25.050539447s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:27:02 +0000 UTC,LastTransitionTime:2019-09-20 04:27:12 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:27:32.632721  108327 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 25.050554689s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:27:02 +0000 UTC,LastTransitionTime:2019-09-20 04:27:12 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:27:32.632730  108327 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 25.050564514s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:27:02 +0000 UTC,LastTransitionTime:2019-09-20 04:27:12 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
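
The node_lifecycle_controller lines above are the controller deciding that all three nodes have gone stale: once a node's last observed status probe is older than the node-monitor grace period, every condition is flipped to Unknown and the node becomes a candidate for the NotReady/Unreachable taints this test exercises. A minimal model of that staleness check, assuming a per-node probe timestamp like the controller's nodeHealth bookkeeping (the helper name and struct here are simplified stand-ins, not the controller's actual code in node_lifecycle_controller.go):

import (
	"time"
)

// nodeHealth is a cut-down stand-in for the controller's per-node record;
// probeTimestamp is bumped whenever a node status update is observed.
type nodeHealth struct {
	probeTimestamp time.Time
}

// isStale reports whether the node has gone unheard-from for longer than
// the grace period, which is when its conditions are set to Unknown and
// it is tainted ("node ... hasn't been updated for 25.05...s" above).
func isStale(h nodeHealth, gracePeriod time.Duration, now time.Time) bool {
	return now.Sub(h.probeTimestamp) > gracePeriod
}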
I0920 04:27:32.708960  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.967815ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:32.711051  108327 httplog.go:90] GET /api/v1/nodes/node-2: (1.356712ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
Sep 20 04:27:32.711: INFO: Waiting up to 15s for pod "testpod-0" in namespace "taint-based-evictions8302cfd2-aef7-421c-9bbc-d15430c938a2" to be "updated with tolerationSeconds of 200"
I0920 04:27:32.712820  108327 httplog.go:90] GET /api/v1/namespaces/taint-based-evictions8302cfd2-aef7-421c-9bbc-d15430c938a2/pods/testpod-0: (1.111615ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
Sep 20 04:27:32.713: INFO: Pod "testpod-0": Phase="Pending", Reason="", readiness=false. Elapsed: 1.674433ms
Sep 20 04:27:32.713: INFO: Pod "testpod-0" satisfied condition "updated with tolerationSeconds of 200"
I0920 04:27:32.717754  108327 httplog.go:90] DELETE /api/v1/namespaces/taint-based-evictions8302cfd2-aef7-421c-9bbc-d15430c938a2/pods/testpod-0: (4.301359ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:32.718021  108327 taint_manager.go:383] Noticed pod deletion: types.NamespacedName{Namespace:"taint-based-evictions8302cfd2-aef7-421c-9bbc-d15430c938a2", Name:"testpod-0"}
I0920 04:27:32.718148  108327 timed_workers.go:129] Cancelling TimedWorkerQueue item taint-based-evictions8302cfd2-aef7-421c-9bbc-d15430c938a2/testpod-0 at 2019-09-20 04:27:32.718143418 +0000 UTC m=+321.016910970
I0920 04:27:32.718076  108327 taint_manager.go:383] Noticed pod deletion: types.NamespacedName{Namespace:"taint-based-evictions8302cfd2-aef7-421c-9bbc-d15430c938a2", Name:"testpod-0"}
I0920 04:27:32.718258  108327 event.go:255] Event(v1.ObjectReference{Kind:"Pod", Namespace:"taint-based-evictions8302cfd2-aef7-421c-9bbc-d15430c938a2", Name:"testpod-0", UID:"", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TaintManagerEviction' Cancelling deletion of Pod taint-based-evictions8302cfd2-aef7-421c-9bbc-d15430c938a2/testpod-0
I0920 04:27:32.718257  108327 timed_workers.go:129] Cancelling TimedWorkerQueue item taint-based-evictions8302cfd2-aef7-421c-9bbc-d15430c938a2/testpod-0 at 2019-09-20 04:27:32.718248856 +0000 UTC m=+321.017016426
I0920 04:27:32.718375  108327 event.go:255] Event(v1.ObjectReference{Kind:"Pod", Namespace:"taint-based-evictions8302cfd2-aef7-421c-9bbc-d15430c938a2", Name:"testpod-0", UID:"", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TaintManagerEviction' Cancelling deletion of Pod taint-based-evictions8302cfd2-aef7-421c-9bbc-d15430c938a2/testpod-0
I0920 04:27:32.720307  108327 httplog.go:90] PATCH /api/v1/namespaces/taint-based-evictions8302cfd2-aef7-421c-9bbc-d15430c938a2/events/testpod-0.15c60b042591208b: (1.712259ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0920 04:27:32.720353  108327 httplog.go:90] PATCH /api/v1/namespaces/taint-based-evictions8302cfd2-aef7-421c-9bbc-d15430c938a2/events/testpod-0.15c60b042592354a: (1.717657ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42660]
I0920 04:27:32.720538  108327 httplog.go:90] GET /api/v1/namespaces/taint-based-evictions8302cfd2-aef7-421c-9bbc-d15430c938a2/pods/testpod-0: (971.664µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42704]
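
The "Cancelling TimedWorkerQueue item" / "TaintManagerEviction" pair above is the taint manager backing out of a scheduled eviction because testpod-0 was deleted before its toleration expired. A rough sketch of that schedule-then-cancel mechanism, using a simplified timer map rather than the real TimedWorkerQueue in timed_workers.go (all names below are hypothetical):

import (
	"sync"
	"time"
)

// evictionQueue stands in for the taint manager's TimedWorkerQueue:
// at most one pending timer per "namespace/name" key.
type evictionQueue struct {
	mu     sync.Mutex
	timers map[string]*time.Timer
}

// schedule arms an eviction to fire once the pod's tolerationSeconds
// delay has elapsed.
func (q *evictionQueue) schedule(key string, delay time.Duration, evict func()) {
	q.mu.Lock()
	defer q.mu.Unlock()
	q.timers[key] = time.AfterFunc(delay, evict)
}

// cancel stops a pending eviction, as happens above when the pod is
// deleted first; it reports whether a timer was actually pending.
func (q *evictionQueue) cancel(key string) bool {
	q.mu.Lock()
	defer q.mu.Unlock()
	t, ok := q.timers[key]
	if ok {
		t.Stop()
		delete(q.timers, key)
	}
	return ok
}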
I0920 04:27:32.725119  108327 node_tree.go:113] Removed node "node-0" in group "region1:\x00:zone1" from NodeTree
I0920 04:27:32.725136  108327 taint_manager.go:422] Noticed node deletion: "node-0"
I0920 04:27:32.725218  108327 taint_manager.go:422] Noticed node deletion: "node-0"
I0920 04:27:32.727176  108327 node_tree.go:113] Removed node "node-1" in group "region1:\x00:zone1" from NodeTree
I0920 04:27:32.727215  108327 taint_manager.go:422] Noticed node deletion: "node-1"
I0920 04:27:32.727235  108327 taint_manager.go:422] Noticed node deletion: "node-1"
I0920 04:27:32.728847  108327 httplog.go:90] DELETE /api/v1/nodes: (7.815149ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42660]
I0920 04:27:32.729006  108327 node_tree.go:113] Removed node "node-2" in group "region1:\x00:zone1" from NodeTree
I0920 04:27:32.728967  108327 taint_manager.go:422] Noticed node deletion: "node-2"
I0920 04:27:32.728973  108327 taint_manager.go:422] Noticed node deletion: "node-2"
I0920 04:27:32.856535  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:32.856620  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:32.856643  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:32.859048  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:32.859210  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:32.859727  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:32.859735  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:33.283979  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:33.283979  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:33.284424  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:33.284435  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:33.284862  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:33.285249  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:27:33.491582  108327 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
    --- FAIL: TestTaintBasedEvictions/Taint_based_evictions_for_NodeNotReady_and_200_tolerationseconds (35.12s)
        taint_test.go:770: Failed to taint node in test 0 <node-2>, err: timed out waiting for the condition

				from junit_d965d8661547eb73cabe6d94d5550ec333e4c0fa_20190920-041605.xml

Filter through log files | View test history on testgrid
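
The failure itself comes from taint_test.go:770 waiting for the NotReady taint to appear on node-2: the steady 100ms GET /api/v1/nodes/node-2 polling above is that wait loop, and "timed out waiting for the condition" is the stock error wait.PollImmediate returns when the condition never becomes true. A minimal sketch of such a wait, assuming a client-go clientset from the test framework and the pre-1.18 client-go Get signature in use at this commit (waitForNodeTaint is a hypothetical helper, not the test's actual code):

import (
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForNodeTaint polls the node every 100ms until it carries the given
// taint; on timeout, wait.PollImmediate returns the
// "timed out waiting for the condition" error seen in the failure above.
func waitForNodeTaint(cs kubernetes.Interface, nodeName string, taint *v1.Taint, timeout time.Duration) error {
	return wait.PollImmediate(100*time.Millisecond, timeout, func() (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(nodeName, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for i := range node.Spec.Taints {
			if node.Spec.Taints[i].MatchTaint(taint) {
				return true, nil
			}
		}
		return false, nil
	})
}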


k8s.io/kubernetes/test/integration/scheduler TestTaintBasedEvictions/Taint_based_evictions_for_NodeNotReady_with_no_pod_tolerations 35s

go test -v k8s.io/kubernetes/test/integration/scheduler -run TestTaintBasedEvictions/Taint_based_evictions_for_NodeNotReady_with_no_pod_tolerations$
=== RUN   TestTaintBasedEvictions/Taint_based_evictions_for_NodeNotReady_with_no_pod_tolerations
W0920 04:27:33.730212  108327 services.go:35] No CIDR for service cluster IPs specified. Default value which was 10.0.0.0/24 is deprecated and will be removed in future releases. Please specify it using --service-cluster-ip-range on kube-apiserver.
I0920 04:27:33.730237  108327 services.go:47] Setting service IP to "10.0.0.1" (read-write).
I0920 04:27:33.730251  108327 master.go:303] Node port range unspecified. Defaulting to 30000-32767.
I0920 04:27:33.730261  108327 master.go:259] Using reconciler: 
I0920 04:27:33.731970  108327 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e8a8ef8e-a13c-42e5-9a2a-7c48946f4762", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:27:33.732266  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:27:33.732373  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:27:33.733246  108327 store.go:1342] Monitoring podtemplates count at <storage-prefix>//podtemplates
I0920 04:27:33.733353  108327 reflector.go:153] Listing and watching *core.PodTemplate from storage/cacher.go:/podtemplates
I0920 04:27:33.733503  108327 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e8a8ef8e-a13c-42e5-9a2a-7c48946f4762", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:27:33.733967  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:27:33.734163  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:27:33.734522  108327 watch_cache.go:405] Replace watchCache (rev: 59454) 
I0920 04:27:33.735172  108327 store.go:1342] Monitoring events count at <storage-prefix>//events
I0920 04:27:33.735219  108327 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e8a8ef8e-a13c-42e5-9a2a-7c48946f4762", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:27:33.735236  108327 reflector.go:153] Listing and watching *core.Event from storage/cacher.go:/events
I0920 04:27:33.735425  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:27:33.735447  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:27:33.736200  108327 watch_cache.go:405] Replace watchCache (rev: 59454) 
I0920 04:27:33.736232  108327 store.go:1342] Monitoring limitranges count at <storage-prefix>//limitranges
I0920 04:27:33.736267  108327 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e8a8ef8e-a13c-42e5-9a2a-7c48946f4762", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:27:33.736277  108327 reflector.go:153] Listing and watching *core.LimitRange from storage/cacher.go:/limitranges
I0920 04:27:33.736505  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:27:33.736536  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:27:33.737060  108327 watch_cache.go:405] Replace watchCache (rev: 59454) 
I0920 04:27:33.737356  108327 store.go:1342] Monitoring resourcequotas count at <storage-prefix>//resourcequotas
I0920 04:27:33.737442  108327 reflector.go:153] Listing and watching *core.ResourceQuota from storage/cacher.go:/resourcequotas
I0920 04:27:33.737573  108327 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e8a8ef8e-a13c-42e5-9a2a-7c48946f4762", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:27:33.737770  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:27:33.737800  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:27:33.738493  108327 store.go:1342] Monitoring secrets count at <storage-prefix>//secrets
I0920 04:27:33.738545  108327 reflector.go:153] Listing and watching *core.Secret from storage/cacher.go:/secrets
I0920 04:27:33.738715  108327 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e8a8ef8e-a13c-42e5-9a2a-7c48946f4762", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:27:33.738868  108327 watch_cache.go:405] Replace watchCache (rev: 59454) 
I0920 04:27:33.738941  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:27:33.738968  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:27:33.739131  108327 watch_cache.go:405] Replace watchCache (rev: 59454) 
I0920 04:27:33.740238  108327 store.go:1342] Monitoring persistentvolumes count at <storage-prefix>//persistentvolumes
I0920 04:27:33.740294  108327 reflector.go:153] Listing and watching *core.PersistentVolume from storage/cacher.go:/persistentvolumes
I0920 04:27:33.740382  108327 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e8a8ef8e-a13c-42e5-9a2a-7c48946f4762", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:27:33.740551  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:27:33.740632  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:27:33.740891  108327 watch_cache.go:405] Replace watchCache (rev: 59454) 
I0920 04:27:33.741473  108327 store.go:1342] Monitoring persistentvolumeclaims count at <storage-prefix>//persistentvolumeclaims
I0920 04:27:33.741628  108327 reflector.go:153] Listing and watching *core.PersistentVolumeClaim from storage/cacher.go:/persistentvolumeclaims
I0920 04:27:33.741699  108327 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e8a8ef8e-a13c-42e5-9a2a-7c48946f4762", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:27:33.741884  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:27:33.741946  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:27:33.742369  108327 watch_cache.go:405] Replace watchCache (rev: 59454) 
I0920 04:27:33.742733  108327 store.go:1342] Monitoring configmaps count at <storage-prefix>//configmaps
I0920 04:27:33.742802  108327 reflector.go:153] Listing and watching *core.ConfigMap from storage/cacher.go:/configmaps
I0920 04:27:33.743027  108327 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e8a8ef8e-a13c-42e5-9a2a-7c48946f4762", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:27:33.743250  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:27:33.743282  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:27:33.743639  108327 watch_cache.go:405] Replace watchCache (rev: 59454) 
I0920 04:27:33.743884  108327 store.go:1342] Monitoring namespaces count at <storage-prefix>//namespaces
I0920 04:27:33.743939  108327 reflector.go:153] Listing and watching *core.Namespace from storage/cacher.go:/namespaces
I0920 04:27:33.744089  108327 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e8a8ef8e-a13c-42e5-9a2a-7c48946f4762", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:27:33.744347  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:27:33.744418  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:27:33.744529  108327 watch_cache.go:405] Replace watchCache (rev: 59454) 
I0920 04:27:33.745191  108327 store.go:1342] Monitoring endpoints count at <storage-prefix>//services/endpoints
I0920 04:27:33.745313  108327 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e8a8ef8e-a13c-42e5-9a2a-7c48946f4762", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:27:33.745444  108327 reflector.go:153] Listing and watching *core.Endpoints from storage/cacher.go:/services/endpoints
I0920 04:27:33.745518  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:27:33.745564  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:27:33.746445  108327 watch_cache.go:405] Replace watchCache (rev: 59454) 
I0920 04:27:33.746466  108327 reflector.go:153] Listing and watching *core.Node from storage/cacher.go:/minions
I0920 04:27:33.746447  108327 store.go:1342] Monitoring nodes count at <storage-prefix>//minions
I0920 04:27:33.746773  108327 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e8a8ef8e-a13c-42e5-9a2a-7c48946f4762", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:27:33.747084  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:27:33.747115  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:27:33.747216  108327 watch_cache.go:405] Replace watchCache (rev: 59454) 
I0920 04:27:33.747898  108327 store.go:1342] Monitoring pods count at <storage-prefix>//pods
I0920 04:27:33.747942  108327 reflector.go:153] Listing and watching *core.Pod from storage/cacher.go:/pods
I0920 04:27:33.748071  108327 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e8a8ef8e-a13c-42e5-9a2a-7c48946f4762", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:27:33.748292  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:27:33.748318  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:27:33.748550  108327 watch_cache.go:405] Replace watchCache (rev: 59454) 
I0920 04:27:33.749176  108327 store.go:1342] Monitoring serviceaccounts count at <storage-prefix>//serviceaccounts
I0920 04:27:33.749282  108327 reflector.go:153] Listing and watching *core.ServiceAccount from storage/cacher.go:/serviceaccounts
I0920 04:27:33.749458  108327 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e8a8ef8e-a13c-42e5-9a2a-7c48946f4762", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:27:33.749778  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:27:33.749808  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:27:33.750178  108327 watch_cache.go:405] Replace watchCache (rev: 59454) 
I0920 04:27:33.750484  108327 store.go:1342] Monitoring services count at <storage-prefix>//services/specs
I0920 04:27:33.750515  108327 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e8a8ef8e-a13c-42e5-9a2a-7c48946f4762", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:27:33.750597  108327 reflector.go:153] Listing and watching *core.Service from storage/cacher.go:/services/specs
I0920 04:27:33.750766  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:27:33.750781  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:27:33.751281  108327 watch_cache.go:405] Replace watchCache (rev: 59454) 
I0920 04:27:33.751629  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:27:33.751656  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:27:33.752340  108327 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e8a8ef8e-a13c-42e5-9a2a-7c48946f4762", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:27:33.752517  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:27:33.752538  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:27:33.753213  108327 store.go:1342] Monitoring replicationcontrollers count at <storage-prefix>//controllers
I0920 04:27:33.753236  108327 reflector.go:153] Listing and watching *core.ReplicationController from storage/cacher.go:/controllers
I0920 04:27:33.753313  108327 rest.go:115] the default service ipfamily for this cluster is: IPv4
I0920 04:27:33.753823  108327 storage_factory.go:285] storing bindings in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e8a8ef8e-a13c-42e5-9a2a-7c48946f4762", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:27:33.754057  108327 storage_factory.go:285] storing componentstatuses in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e8a8ef8e-a13c-42e5-9a2a-7c48946f4762", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:27:33.754349  108327 watch_cache.go:405] Replace watchCache (rev: 59454) 
I0920 04:27:33.754722  108327 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e8a8ef8e-a13c-42e5-9a2a-7c48946f4762", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:27:33.755307  108327 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e8a8ef8e-a13c-42e5-9a2a-7c48946f4762", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:27:33.755861  108327 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e8a8ef8e-a13c-42e5-9a2a-7c48946f4762", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:27:33.756349  108327 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e8a8ef8e-a13c-42e5-9a2a-7c48946f4762", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:27:33.756674  108327 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e8a8ef8e-a13c-42e5-9a2a-7c48946f4762", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:27:33.756758  108327 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e8a8ef8e-a13c-42e5-9a2a-7c48946f4762", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:27:33.756894  108327 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e8a8ef8e-a13c-42e5-9a2a-7c48946f4762", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:27:33.757280  108327 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e8a8ef8e-a13c-42e5-9a2a-7c48946f4762", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:27:33.757703  108327 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e8a8ef8e-a13c-42e5-9a2a-7c48946f4762", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:27:33.757840  108327 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e8a8ef8e-a13c-42e5-9a2a-7c48946f4762", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:27:33.758326  108327 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e8a8ef8e-a13c-42e5-9a2a-7c48946f4762", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:27:33.758547  108327 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e8a8ef8e-a13c-42e5-9a2a-7c48946f4762", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:27:33.758918  108327 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e8a8ef8e-a13c-42e5-9a2a-7c48946f4762", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:27:33.759097  108327 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e8a8ef8e-a13c-42e5-9a2a-7c48946f4762", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:27:33.759576  108327 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e8a8ef8e-a13c-42e5-9a2a-7c48946f4762", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:27:33.759730  108327 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e8a8ef8e-a13c-42e5-9a2a-7c48946f4762", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:27:33.759826  108327 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e8a8ef8e-a13c-42e5-9a2a-7c48946f4762", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:27:33.759912  108327 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e8a8ef8e-a13c-42e5-9a2a-7c48946f4762", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:27:33.760029  108327 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e8a8ef8e-a13c-42e5-9a2a-7c48946f4762", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:27:33.760140  108327 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e8a8ef8e-a13c-42e5-9a2a-7c48946f4762", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:27:33.760295  108327 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e8a8ef8e-a13c-42e5-9a2a-7c48946f4762", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:27:33.760835  108327 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e8a8ef8e-a13c-42e5-9a2a-7c48946f4762", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:27:33.761020  108327 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e8a8ef8e-a13c-42e5-9a2a-7c48946f4762", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:27:33.761574  108327 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e8a8ef8e-a13c-42e5-9a2a-7c48946f4762", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:27:33.762107  108327 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e8a8ef8e-a13c-42e5-9a2a-7c48946f4762", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:27:33.762312  108327 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e8a8ef8e-a13c-42e5-9a2a-7c48946f4762", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:27:33.762545  108327 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e8a8ef8e-a13c-42e5-9a2a-7c48946f4762", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:27:33.763044  108327 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e8a8ef8e-a13c-42e5-9a2a-7c48946f4762", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:27:33.763220  108327 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e8a8ef8e-a13c-42e5-9a2a-7c48946f4762", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:27:33.763751  108327 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e8a8ef8e-a13c-42e5-9a2a-7c48946f4762", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:27:33.764209  108327 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e8a8ef8e-a13c-42e5-9a2a-7c48946f4762", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:27:33.764696  108327 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e8a8ef8e-a13c-42e5-9a2a-7c48946f4762", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:27:33.765198  108327 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e8a8ef8e-a13c-42e5-9a2a-7c48946f4762", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:27:33.765418  108327 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e8a8ef8e-a13c-42e5-9a2a-7c48946f4762", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:27:33.765521  108327 master.go:450] Skipping disabled API group "auditregistration.k8s.io".
I0920 04:27:33.765546  108327 master.go:461] Enabling API group "authentication.k8s.io".
I0920 04:27:33.765568  108327 master.go:461] Enabling API group "authorization.k8s.io".
I0920 04:27:33.765726  108327 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"e8a8ef8e-a13c-42e5-9a2a-7c48946f4762", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:27:33.765928  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:27:33.765954  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:27:33.766916  108327 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0920 04:27:33.767061  108327 reflector.go:153] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0920 04:27:33.767105  108327 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"e8a8ef8e-a13c-42e5-9a2a-7c48946f4762", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:27:33.767456  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:27:33.767559  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:27:33.768352  108327 watch_cache.go:405] Replace watchCache (rev: 59454) 
I0920 04:27:33.769043  108327 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0920 04:27:33.769191  108327 reflector.go:153] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0920 04:27:33.769259  108327 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"e8a8ef8e-a13c-42e5-9a2a-7c48946f4762", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:27:33.769554  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:27:33.769713  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:27:33.770173  108327 watch_cache.go:405] Replace watchCache (rev: 59454) 
I0920 04:27:33.770555  108327 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0920 04:27:33.770577  108327 master.go:461] Enabling API group "autoscaling".
I0920 04:27:33.770725  108327 reflector.go:153] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0920 04:27:33.770739  108327 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"e8a8ef8e-a13c-42e5-9a2a-7c48946f4762", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:27:33.770905  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:27:33.770921  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:27:33.771677  108327 watch_cache.go:405] Replace watchCache (rev: 59454) 
I0920 04:27:33.771850  108327 store.go:1342] Monitoring jobs.batch count at <storage-prefix>//jobs
I0920 04:27:33.771898  108327 reflector.go:153] Listing and watching *batch.Job from storage/cacher.go:/jobs
I0920 04:27:33.772674  108327 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"e8a8ef8e-a13c-42e5-9a2a-7c48946f4762", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:27:33.772748  108327 watch_cache.go:405] Replace watchCache (rev: 59454) 
I0920 04:27:33.773287  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:27:33.773323  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:27:33.774156  108327 store.go:1342] Monitoring cronjobs.batch count at <storage-prefix>//cronjobs
I0920 04:27:33.774183  108327 reflector.go:153] Listing and watching *batch.CronJob from storage/cacher.go:/cronjobs
I0920 04:27:33.774189  108327 master.go:461] Enabling API group "batch".
I0920 04:27:33.774377  108327 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e8a8ef8e-a13c-42e5-9a2a-7c48946f4762", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:27:33.774605  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:27:33.774628  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:27:33.775093  108327 watch_cache.go:405] Replace watchCache (rev: 59454) 
I0920 04:27:33.775900  108327 store.go:1342] Monitoring certificatesigningrequests.certificates.k8s.io count at <storage-prefix>//certificatesigningrequests
I0920 04:27:33.775926  108327 master.go:461] Enabling API group "certificates.k8s.io".
I0920 04:27:33.775985  108327 reflector.go:153] Listing and watching *certificates.CertificateSigningRequest from storage/cacher.go:/certificatesigningrequests
I0920 04:27:33.776049  108327 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e8a8ef8e-a13c-42e5-9a2a-7c48946f4762", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:27:33.776166  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:27:33.776189  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:27:33.776848  108327 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0920 04:27:33.776922  108327 reflector.go:153] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0920 04:27:33.777024  108327 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e8a8ef8e-a13c-42e5-9a2a-7c48946f4762", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:27:33.777219  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:27:33.777250  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:27:33.777253  108327 watch_cache.go:405] Replace watchCache (rev: 59454) 
I0920 04:27:33.778247  108327 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0920 04:27:33.778274  108327 master.go:461] Enabling API group "coordination.k8s.io".
I0920 04:27:33.778289  108327 master.go:450] Skipping disabled API group "discovery.k8s.io".
I0920 04:27:33.778464  108327 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e8a8ef8e-a13c-42e5-9a2a-7c48946f4762", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:27:33.778602  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:27:33.778629  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:27:33.778702  108327 reflector.go:153] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0920 04:27:33.778800  108327 watch_cache.go:405] Replace watchCache (rev: 59454) 
I0920 04:27:33.779403  108327 watch_cache.go:405] Replace watchCache (rev: 59454) 
I0920 04:27:33.779657  108327 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0920 04:27:33.779683  108327 master.go:461] Enabling API group "extensions".
I0920 04:27:33.779728  108327 reflector.go:153] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0920 04:27:33.779793  108327 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e8a8ef8e-a13c-42e5-9a2a-7c48946f4762", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:27:33.780216  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:27:33.780254  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:27:33.780572  108327 watch_cache.go:405] Replace watchCache (rev: 59454) 
I0920 04:27:33.780938  108327 store.go:1342] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I0920 04:27:33.780977  108327 reflector.go:153] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I0920 04:27:33.781103  108327 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e8a8ef8e-a13c-42e5-9a2a-7c48946f4762", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:27:33.781224  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:27:33.781241  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:27:33.781689  108327 watch_cache.go:405] Replace watchCache (rev: 59454) 
I0920 04:27:33.781831  108327 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0920 04:27:33.781860  108327 master.go:461] Enabling API group "networking.k8s.io".
I0920 04:27:33.781891  108327 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e8a8ef8e-a13c-42e5-9a2a-7c48946f4762", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:27:33.781912  108327 reflector.go:153] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0920 04:27:33.782076  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:27:33.782120  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:27:33.782744  108327 store.go:1342] Monitoring runtimeclasses.node.k8s.io count at <storage-prefix>//runtimeclasses
I0920 04:27:33.782771  108327 master.go:461] Enabling API group "node.k8s.io".
I0920 04:27:33.782775  108327 reflector.go:153] Listing and watching *node.RuntimeClass from storage/cacher.go:/runtimeclasses
I0920 04:27:33.782785  108327 watch_cache.go:405] Replace watchCache (rev: 59454) 
I0920 04:27:33.782951  108327 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"e8a8ef8e-a13c-42e5-9a2a-7c48946f4762", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:27:33.783083  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:27:33.783185  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:27:33.783476  108327 watch_cache.go:405] Replace watchCache (rev: 59454) 
I0920 04:27:33.783906  108327 store.go:1342] Monitoring poddisruptionbudgets.policy count at <storage-prefix>//poddisruptionbudgets
I0920 04:27:33.783989  108327 reflector.go:153] Listing and watching *policy.PodDisruptionBudget from storage/cacher.go:/poddisruptionbudgets
I0920 04:27:33.784029  108327 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"e8a8ef8e-a13c-42e5-9a2a-7c48946f4762", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:27:33.784142  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:27:33.784191  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:27:33.784923  108327 watch_cache.go:405] Replace watchCache (rev: 59454) 
I0920 04:27:33.785116  108327 store.go:1342] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicy
I0920 04:27:33.785149  108327 reflector.go:153] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicy
I0920 04:27:33.785155  108327 master.go:461] Enabling API group "policy".
I0920 04:27:33.785277  108327 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e8a8ef8e-a13c-42e5-9a2a-7c48946f4762", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:27:33.785495  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:27:33.785524  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:27:33.785899  108327 watch_cache.go:405] Replace watchCache (rev: 59454) 
I0920 04:27:33.786061  108327 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0920 04:27:33.786087  108327 reflector.go:153] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0920 04:27:33.786177  108327 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e8a8ef8e-a13c-42e5-9a2a-7c48946f4762", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:27:33.786312  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:27:33.786329  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:27:33.786917  108327 watch_cache.go:405] Replace watchCache (rev: 59454) 
I0920 04:27:33.786954  108327 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0920 04:27:33.787018  108327 reflector.go:153] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0920 04:27:33.786988  108327 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e8a8ef8e-a13c-42e5-9a2a-7c48946f4762", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:27:33.787221  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:27:33.787245  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:27:33.787697  108327 watch_cache.go:405] Replace watchCache (rev: 59454) 
I0920 04:27:33.787871  108327 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0920 04:27:33.787897  108327 reflector.go:153] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0920 04:27:33.788045  108327 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e8a8ef8e-a13c-42e5-9a2a-7c48946f4762", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:27:33.788194  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:27:33.788218  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:27:33.788642  108327 watch_cache.go:405] Replace watchCache (rev: 59454) 
I0920 04:27:33.789533  108327 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0920 04:27:33.789575  108327 reflector.go:153] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0920 04:27:33.789580  108327 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e8a8ef8e-a13c-42e5-9a2a-7c48946f4762", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:27:33.789689  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:27:33.789708  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:27:33.790512  108327 watch_cache.go:405] Replace watchCache (rev: 59454) 
I0920 04:27:33.790949  108327 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0920 04:27:33.791072  108327 reflector.go:153] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0920 04:27:33.791169  108327 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e8a8ef8e-a13c-42e5-9a2a-7c48946f4762", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:27:33.791328  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:27:33.791362  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:27:33.792069  108327 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0920 04:27:33.792103  108327 watch_cache.go:405] Replace watchCache (rev: 59454) 
I0920 04:27:33.792107  108327 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e8a8ef8e-a13c-42e5-9a2a-7c48946f4762", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:27:33.792193  108327 reflector.go:153] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0920 04:27:33.792207  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:27:33.792262  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:27:33.792909  108327 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0920 04:27:33.792938  108327 watch_cache.go:405] Replace watchCache (rev: 59454) 
I0920 04:27:33.792953  108327 reflector.go:153] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0920 04:27:33.793139  108327 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e8a8ef8e-a13c-42e5-9a2a-7c48946f4762", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:27:33.793428  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:27:33.793529  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:27:33.794064  108327 watch_cache.go:405] Replace watchCache (rev: 59454) 
I0920 04:27:33.794194  108327 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0920 04:27:33.794228  108327 reflector.go:153] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0920 04:27:33.794229  108327 master.go:461] Enabling API group "rbac.authorization.k8s.io".
I0920 04:27:33.794924  108327 watch_cache.go:405] Replace watchCache (rev: 59454) 
I0920 04:27:33.796659  108327 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e8a8ef8e-a13c-42e5-9a2a-7c48946f4762", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:27:33.796814  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:27:33.796835  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:27:33.797498  108327 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0920 04:27:33.797565  108327 reflector.go:153] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0920 04:27:33.797690  108327 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e8a8ef8e-a13c-42e5-9a2a-7c48946f4762", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:27:33.797818  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:27:33.797835  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:27:33.798562  108327 watch_cache.go:405] Replace watchCache (rev: 59454) 
I0920 04:27:33.798627  108327 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0920 04:27:33.798647  108327 master.go:461] Enabling API group "scheduling.k8s.io".
I0920 04:27:33.798667  108327 reflector.go:153] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0920 04:27:33.798799  108327 master.go:450] Skipping disabled API group "settings.k8s.io".
I0920 04:27:33.798948  108327 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e8a8ef8e-a13c-42e5-9a2a-7c48946f4762", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:27:33.799082  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:27:33.799105  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:27:33.799672  108327 watch_cache.go:405] Replace watchCache (rev: 59454) 
I0920 04:27:33.799872  108327 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0920 04:27:33.799916  108327 reflector.go:153] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0920 04:27:33.800272  108327 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e8a8ef8e-a13c-42e5-9a2a-7c48946f4762", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:27:33.800612  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:27:33.800693  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:27:33.800806  108327 watch_cache.go:405] Replace watchCache (rev: 59454) 
I0920 04:27:33.801443  108327 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0920 04:27:33.801478  108327 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e8a8ef8e-a13c-42e5-9a2a-7c48946f4762", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:27:33.801499  108327 reflector.go:153] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0920 04:27:33.801678  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:27:33.801700  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:27:33.802134  108327 watch_cache.go:405] Replace watchCache (rev: 59454) 
I0920 04:27:33.802367  108327 store.go:1342] Monitoring csinodes.storage.k8s.io count at <storage-prefix>//csinodes
I0920 04:27:33.802425  108327 reflector.go:153] Listing and watching *storage.CSINode from storage/cacher.go:/csinodes
I0920 04:27:33.802446  108327 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e8a8ef8e-a13c-42e5-9a2a-7c48946f4762", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:27:33.802625  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:27:33.802657  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:27:33.803253  108327 watch_cache.go:405] Replace watchCache (rev: 59454) 
I0920 04:27:33.803379  108327 store.go:1342] Monitoring csidrivers.storage.k8s.io count at <storage-prefix>//csidrivers
I0920 04:27:33.803432  108327 reflector.go:153] Listing and watching *storage.CSIDriver from storage/cacher.go:/csidrivers
I0920 04:27:33.803577  108327 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e8a8ef8e-a13c-42e5-9a2a-7c48946f4762", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:27:33.803691  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:27:33.803716  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:27:33.804099  108327 watch_cache.go:405] Replace watchCache (rev: 59454) 
I0920 04:27:33.804289  108327 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0920 04:27:33.804313  108327 reflector.go:153] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0920 04:27:33.804504  108327 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e8a8ef8e-a13c-42e5-9a2a-7c48946f4762", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:27:33.804695  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:27:33.804728  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:27:33.805089  108327 watch_cache.go:405] Replace watchCache (rev: 59454) 
I0920 04:27:33.805439  108327 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0920 04:27:33.805478  108327 master.go:461] Enabling API group "storage.k8s.io".
I0920 04:27:33.805516  108327 reflector.go:153] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0920 04:27:33.805620  108327 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"e8a8ef8e-a13c-42e5-9a2a-7c48946f4762", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:27:33.805740  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:27:33.806108  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:27:33.806213  108327 watch_cache.go:405] Replace watchCache (rev: 59454) 
I0920 04:27:33.806829  108327 store.go:1342] Monitoring deployments.apps count at <storage-prefix>//deployments
I0920 04:27:33.806915  108327 reflector.go:153] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0920 04:27:33.806946  108327 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"e8a8ef8e-a13c-42e5-9a2a-7c48946f4762", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:27:33.807137  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:27:33.807163  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:27:33.807686  108327 watch_cache.go:405] Replace watchCache (rev: 59454) 
I0920 04:27:33.807975  108327 store.go:1342] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0920 04:27:33.808049  108327 reflector.go:153] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0920 04:27:33.808185  108327 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"e8a8ef8e-a13c-42e5-9a2a-7c48946f4762", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:27:33.808325  108327 client.go:361] parsed scheme: "endpoint"
I0920 04:27:33.808348  108327 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:27:33.808801  108327 watch_cache.go:405] Replace watchCache (rev: 59454) 
I0920 04:27:33.809655  108327 store.go:1342] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0920 04:27:33.809696  108327 reflector.go:153] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0920 04:27:33.809835  108327 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"e8a8ef8e-a13c-42e5-9a2a-7c48946f4762", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}