PR: draveness: feat: update taint nodes by condition to GA
Result: FAILURE
Tests: 5 failed / 2862 succeeded
Started: 2019-09-20 06:22
Elapsed: 28m23s
Revision:
Builder: gke-prow-ssd-pool-1a225945-7nsk
Refs: master:53b3c896, 82703:e7ac4c63
pod: 039d2d91-db6f-11e9-a2c5-42201fa4e0be
infra-commit: 2148f6bfa
repo: k8s.io/kubernetes
repo-commit: 62f41deff85513e7a8c4d15999cbe4c93b1ffc73
repos: {k8s.io/kubernetes: master:53b3c8968e79153dd99acca93c823e93c9beb542,82703:e7ac4c63a6194a3843195861e89411e9f82bf9e3}

Test Failures


k8s.io/kubernetes/test/integration/scheduler TestTaintBasedEvictions 2m19s

go test -v k8s.io/kubernetes/test/integration/scheduler -run TestTaintBasedEvictions$
=== RUN   TestTaintBasedEvictions
I0920 06:48:36.400195  108489 feature_gate.go:216] feature gates: &{map[EvenPodsSpread:false TaintBasedEvictions:true]}
--- FAIL: TestTaintBasedEvictions (139.30s)

				from junit_d965d8661547eb73cabe6d94d5550ec333e4c0fa_20190920-063834.xml
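
The failing test pins its feature gates at startup (see the feature_gate.go line above: TaintBasedEvictions:true, EvenPodsSpread:false). As a minimal sketch, assuming the standard k8s.io/component-base testing helper, gates are typically pinned for the duration of a single integration test like this (the test name here is illustrative, not the PR's actual code):

package scheduler

import (
	"testing"

	utilfeature "k8s.io/apiserver/pkg/util/feature"
	featuregatetesting "k8s.io/component-base/featuregate/testing"
	"k8s.io/kubernetes/pkg/features"
)

// Sketch only: pin the two gates logged above for one test and restore
// their previous values when the test finishes.
func TestTaintBasedEvictionsGates(t *testing.T) {
	defer featuregatetesting.SetFeatureGateDuringTest(t, utilfeature.DefaultFeatureGate, features.TaintBasedEvictions, true)()
	defer featuregatetesting.SetFeatureGateDuringTest(t, utilfeature.DefaultFeatureGate, features.EvenPodsSpread, false)()
	// ... bring up the test control plane and run the eviction scenarios.
}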



k8s.io/kubernetes/test/integration/scheduler TestTaintBasedEvictions/Taint_based_evictions_for_NodeNotReady_and_0_tolerationseconds 35s

go test -v k8s.io/kubernetes/test/integration/scheduler -run TestTaintBasedEvictions/Taint_based_evictions_for_NodeNotReady_and_0_tolerationseconds$
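
This subtest exercises a pod that tolerates the NoExecute not-ready taint for 0 seconds, so the taint manager should evict it as soon as the node turns NotReady. A minimal sketch of that toleration shape, using k8s.io/api/core/v1 types (the helper name is illustrative, not the test's actual code):

package scheduler

import (
	v1 "k8s.io/api/core/v1"
)

// zeroSecondNotReadyToleration sketches the toleration under test:
// tolerate node.kubernetes.io/not-ready (NoExecute) for 0 seconds,
// which permits immediate eviction once the taint is applied.
func zeroSecondNotReadyToleration() v1.Toleration {
	zero := int64(0)
	return v1.Toleration{
		Key:               "node.kubernetes.io/not-ready",
		Operator:          v1.TolerationOpExists,
		Effect:            v1.TaintEffectNoExecute,
		TolerationSeconds: &zero,
	}
}

The captured apiserver startup log for this subtest follows.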
=== RUN   TestTaintBasedEvictions/Taint_based_evictions_for_NodeNotReady_and_0_tolerationseconds
W0920 06:49:45.532198  108489 services.go:35] No CIDR for service cluster IPs specified. Default value which was 10.0.0.0/24 is deprecated and will be removed in future releases. Please specify it using --service-cluster-ip-range on kube-apiserver.
I0920 06:49:45.532294  108489 services.go:47] Setting service IP to "10.0.0.1" (read-write).
I0920 06:49:45.532356  108489 master.go:303] Node port range unspecified. Defaulting to 30000-32767.
I0920 06:49:45.532418  108489 master.go:259] Using reconciler: 
I0920 06:49:45.534018  108489 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.534295  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:49:45.534469  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:49:45.535678  108489 store.go:1342] Monitoring podtemplates count at <storage-prefix>//podtemplates
I0920 06:49:45.535750  108489 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.535803  108489 reflector.go:153] Listing and watching *core.PodTemplate from storage/cacher.go:/podtemplates
I0920 06:49:45.536074  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:49:45.536093  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:49:45.537286  108489 watch_cache.go:405] Replace watchCache (rev: 58793) 
I0920 06:49:45.537722  108489 store.go:1342] Monitoring events count at <storage-prefix>//events
I0920 06:49:45.537813  108489 reflector.go:153] Listing and watching *core.Event from storage/cacher.go:/events
I0920 06:49:45.537847  108489 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.538182  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:49:45.538204  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:49:45.539073  108489 watch_cache.go:405] Replace watchCache (rev: 58793) 
I0920 06:49:45.539633  108489 store.go:1342] Monitoring limitranges count at <storage-prefix>//limitranges
I0920 06:49:45.539658  108489 reflector.go:153] Listing and watching *core.LimitRange from storage/cacher.go:/limitranges
I0920 06:49:45.539672  108489 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.539824  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:49:45.539852  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:49:45.540388  108489 store.go:1342] Monitoring resourcequotas count at <storage-prefix>//resourcequotas
I0920 06:49:45.540458  108489 reflector.go:153] Listing and watching *core.ResourceQuota from storage/cacher.go:/resourcequotas
I0920 06:49:45.540573  108489 watch_cache.go:405] Replace watchCache (rev: 58793) 
I0920 06:49:45.540671  108489 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.540823  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:49:45.540850  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:49:45.541232  108489 watch_cache.go:405] Replace watchCache (rev: 58793) 
I0920 06:49:45.541602  108489 store.go:1342] Monitoring secrets count at <storage-prefix>//secrets
I0920 06:49:45.541720  108489 reflector.go:153] Listing and watching *core.Secret from storage/cacher.go:/secrets
I0920 06:49:45.541840  108489 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.541982  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:49:45.542013  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:49:45.542743  108489 store.go:1342] Monitoring persistentvolumes count at <storage-prefix>//persistentvolumes
I0920 06:49:45.542775  108489 watch_cache.go:405] Replace watchCache (rev: 58793) 
I0920 06:49:45.542777  108489 reflector.go:153] Listing and watching *core.PersistentVolume from storage/cacher.go:/persistentvolumes
I0920 06:49:45.542940  108489 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.543049  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:49:45.543060  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:49:45.543583  108489 store.go:1342] Monitoring persistentvolumeclaims count at <storage-prefix>//persistentvolumeclaims
I0920 06:49:45.543758  108489 watch_cache.go:405] Replace watchCache (rev: 58793) 
I0920 06:49:45.543821  108489 reflector.go:153] Listing and watching *core.PersistentVolumeClaim from storage/cacher.go:/persistentvolumeclaims
I0920 06:49:45.543879  108489 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.543983  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:49:45.543996  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:49:45.544543  108489 watch_cache.go:405] Replace watchCache (rev: 58793) 
I0920 06:49:45.544679  108489 store.go:1342] Monitoring configmaps count at <storage-prefix>//configmaps
I0920 06:49:45.544781  108489 reflector.go:153] Listing and watching *core.ConfigMap from storage/cacher.go:/configmaps
I0920 06:49:45.544873  108489 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.545088  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:49:45.545112  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:49:45.545949  108489 store.go:1342] Monitoring namespaces count at <storage-prefix>//namespaces
I0920 06:49:45.546122  108489 reflector.go:153] Listing and watching *core.Namespace from storage/cacher.go:/namespaces
I0920 06:49:45.546143  108489 watch_cache.go:405] Replace watchCache (rev: 58793) 
I0920 06:49:45.546608  108489 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.546873  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:49:45.547025  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:49:45.547183  108489 watch_cache.go:405] Replace watchCache (rev: 58793) 
I0920 06:49:45.548268  108489 store.go:1342] Monitoring endpoints count at <storage-prefix>//services/endpoints
I0920 06:49:45.548339  108489 reflector.go:153] Listing and watching *core.Endpoints from storage/cacher.go:/services/endpoints
I0920 06:49:45.548660  108489 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.548784  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:49:45.548805  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:49:45.549537  108489 watch_cache.go:405] Replace watchCache (rev: 58793) 
I0920 06:49:45.549849  108489 store.go:1342] Monitoring nodes count at <storage-prefix>//minions
I0920 06:49:45.549986  108489 reflector.go:153] Listing and watching *core.Node from storage/cacher.go:/minions
I0920 06:49:45.550425  108489 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.550536  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:49:45.550553  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:49:45.550758  108489 watch_cache.go:405] Replace watchCache (rev: 58793) 
I0920 06:49:45.551614  108489 store.go:1342] Monitoring pods count at <storage-prefix>//pods
I0920 06:49:45.551751  108489 reflector.go:153] Listing and watching *core.Pod from storage/cacher.go:/pods
I0920 06:49:45.551772  108489 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.551987  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:49:45.552136  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:49:45.552668  108489 watch_cache.go:405] Replace watchCache (rev: 58793) 
I0920 06:49:45.553422  108489 store.go:1342] Monitoring serviceaccounts count at <storage-prefix>//serviceaccounts
I0920 06:49:45.553529  108489 reflector.go:153] Listing and watching *core.ServiceAccount from storage/cacher.go:/serviceaccounts
I0920 06:49:45.553723  108489 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.553842  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:49:45.553861  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:49:45.554362  108489 store.go:1342] Monitoring services count at <storage-prefix>//services/specs
I0920 06:49:45.554419  108489 reflector.go:153] Listing and watching *core.Service from storage/cacher.go:/services/specs
I0920 06:49:45.554411  108489 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.554780  108489 watch_cache.go:405] Replace watchCache (rev: 58793) 
I0920 06:49:45.554866  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:49:45.554888  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:49:45.555249  108489 watch_cache.go:405] Replace watchCache (rev: 58793) 
I0920 06:49:45.555778  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:49:45.555808  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:49:45.556520  108489 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.556667  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:49:45.556691  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:49:45.557377  108489 store.go:1342] Monitoring replicationcontrollers count at <storage-prefix>//controllers
I0920 06:49:45.557401  108489 rest.go:115] the default service ipfamily for this cluster is: IPv4
I0920 06:49:45.557406  108489 reflector.go:153] Listing and watching *core.ReplicationController from storage/cacher.go:/controllers
I0920 06:49:45.558153  108489 storage_factory.go:285] storing bindings in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.558444  108489 watch_cache.go:405] Replace watchCache (rev: 58793) 
I0920 06:49:45.558545  108489 storage_factory.go:285] storing componentstatuses in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.559359  108489 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.560414  108489 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.561072  108489 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.562139  108489 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.562871  108489 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.563014  108489 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.563351  108489 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.564055  108489 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.565025  108489 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.565241  108489 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.566039  108489 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.566437  108489 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.567044  108489 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.567433  108489 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.568551  108489 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.568750  108489 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.568916  108489 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.569126  108489 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.569371  108489 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.569540  108489 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.569682  108489 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.570816  108489 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.571374  108489 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.572599  108489 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.573449  108489 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.573836  108489 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.574114  108489 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.575216  108489 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.575883  108489 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.576785  108489 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.577527  108489 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.578387  108489 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.579680  108489 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.580050  108489 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.580237  108489 master.go:450] Skipping disabled API group "auditregistration.k8s.io".
I0920 06:49:45.580260  108489 master.go:461] Enabling API group "authentication.k8s.io".
I0920 06:49:45.580292  108489 master.go:461] Enabling API group "authorization.k8s.io".
I0920 06:49:45.580487  108489 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.580714  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:49:45.580752  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:49:45.582014  108489 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0920 06:49:45.582183  108489 reflector.go:153] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0920 06:49:45.582240  108489 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.582479  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:49:45.582518  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:49:45.583281  108489 watch_cache.go:405] Replace watchCache (rev: 58793) 
I0920 06:49:45.583608  108489 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0920 06:49:45.583972  108489 reflector.go:153] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0920 06:49:45.583988  108489 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.584360  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:49:45.584387  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:49:45.584880  108489 watch_cache.go:405] Replace watchCache (rev: 58793) 
I0920 06:49:45.585251  108489 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0920 06:49:45.585281  108489 master.go:461] Enabling API group "autoscaling".
I0920 06:49:45.585309  108489 reflector.go:153] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0920 06:49:45.585525  108489 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.585649  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:49:45.585678  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:49:45.586214  108489 watch_cache.go:405] Replace watchCache (rev: 58793) 
I0920 06:49:45.586563  108489 store.go:1342] Monitoring jobs.batch count at <storage-prefix>//jobs
I0920 06:49:45.586652  108489 reflector.go:153] Listing and watching *batch.Job from storage/cacher.go:/jobs
I0920 06:49:45.587150  108489 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.587362  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:49:45.587465  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:49:45.588041  108489 watch_cache.go:405] Replace watchCache (rev: 58793) 
I0920 06:49:45.588282  108489 store.go:1342] Monitoring cronjobs.batch count at <storage-prefix>//cronjobs
I0920 06:49:45.588322  108489 master.go:461] Enabling API group "batch".
I0920 06:49:45.588324  108489 reflector.go:153] Listing and watching *batch.CronJob from storage/cacher.go:/cronjobs
I0920 06:49:45.588487  108489 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.588792  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:49:45.588839  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:49:45.589316  108489 watch_cache.go:405] Replace watchCache (rev: 58793) 
I0920 06:49:45.589540  108489 store.go:1342] Monitoring certificatesigningrequests.certificates.k8s.io count at <storage-prefix>//certificatesigningrequests
I0920 06:49:45.589561  108489 reflector.go:153] Listing and watching *certificates.CertificateSigningRequest from storage/cacher.go:/certificatesigningrequests
I0920 06:49:45.589570  108489 master.go:461] Enabling API group "certificates.k8s.io".
I0920 06:49:45.589914  108489 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.590200  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:49:45.590219  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:49:45.590929  108489 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0920 06:49:45.590969  108489 reflector.go:153] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0920 06:49:45.591318  108489 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.591469  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:49:45.591483  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:49:45.592281  108489 watch_cache.go:405] Replace watchCache (rev: 58793) 
I0920 06:49:45.592321  108489 watch_cache.go:405] Replace watchCache (rev: 58793) 
I0920 06:49:45.593419  108489 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0920 06:49:45.593442  108489 master.go:461] Enabling API group "coordination.k8s.io".
I0920 06:49:45.593455  108489 reflector.go:153] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0920 06:49:45.593457  108489 master.go:450] Skipping disabled API group "discovery.k8s.io".
I0920 06:49:45.593690  108489 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.593883  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:49:45.593908  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:49:45.594304  108489 watch_cache.go:405] Replace watchCache (rev: 58793) 
I0920 06:49:45.594410  108489 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0920 06:49:45.594514  108489 master.go:461] Enabling API group "extensions".
I0920 06:49:45.594554  108489 reflector.go:153] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0920 06:49:45.594659  108489 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.594868  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:49:45.594890  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:49:45.595332  108489 store.go:1342] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I0920 06:49:45.595548  108489 watch_cache.go:405] Replace watchCache (rev: 58793) 
I0920 06:49:45.595611  108489 reflector.go:153] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I0920 06:49:45.596055  108489 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.596191  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:49:45.596211  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:49:45.597138  108489 watch_cache.go:405] Replace watchCache (rev: 58793) 
I0920 06:49:45.597493  108489 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0920 06:49:45.597512  108489 master.go:461] Enabling API group "networking.k8s.io".
I0920 06:49:45.597547  108489 reflector.go:153] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0920 06:49:45.597544  108489 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.597634  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:49:45.597645  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:49:45.598746  108489 watch_cache.go:405] Replace watchCache (rev: 58793) 
I0920 06:49:45.598954  108489 store.go:1342] Monitoring runtimeclasses.node.k8s.io count at <storage-prefix>//runtimeclasses
I0920 06:49:45.598985  108489 master.go:461] Enabling API group "node.k8s.io".
I0920 06:49:45.599112  108489 reflector.go:153] Listing and watching *node.RuntimeClass from storage/cacher.go:/runtimeclasses
I0920 06:49:45.599537  108489 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.599718  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:49:45.599742  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:49:45.600469  108489 watch_cache.go:405] Replace watchCache (rev: 58793) 
I0920 06:49:45.602295  108489 store.go:1342] Monitoring poddisruptionbudgets.policy count at <storage-prefix>//poddisruptionbudgets
I0920 06:49:45.602475  108489 reflector.go:153] Listing and watching *policy.PodDisruptionBudget from storage/cacher.go:/poddisruptionbudgets
I0920 06:49:45.602499  108489 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.602644  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:49:45.602668  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:49:45.603642  108489 watch_cache.go:405] Replace watchCache (rev: 58794) 
I0920 06:49:45.606912  108489 store.go:1342] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicy
I0920 06:49:45.606941  108489 master.go:461] Enabling API group "policy".
I0920 06:49:45.606982  108489 reflector.go:153] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicy
I0920 06:49:45.606993  108489 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.607680  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:49:45.607838  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:49:45.608841  108489 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0920 06:49:45.609020  108489 reflector.go:153] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0920 06:49:45.609343  108489 watch_cache.go:405] Replace watchCache (rev: 58794) 
I0920 06:49:45.609825  108489 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.610060  108489 watch_cache.go:405] Replace watchCache (rev: 58795) 
I0920 06:49:45.610309  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:49:45.610418  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:49:45.611568  108489 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0920 06:49:45.611596  108489 reflector.go:153] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0920 06:49:45.611862  108489 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.612407  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:49:45.612512  108489 watch_cache.go:405] Replace watchCache (rev: 58795)
I0920 06:49:45.612545  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:49:45.613509  108489 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0920 06:49:45.613611  108489 reflector.go:153] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0920 06:49:45.613886  108489 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.614230  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:49:45.614328  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:49:45.614373  108489 watch_cache.go:405] Replace watchCache (rev: 58795)
I0920 06:49:45.615090  108489 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0920 06:49:45.615248  108489 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.615440  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:49:45.615504  108489 reflector.go:153] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0920 06:49:45.615545  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:49:45.616510  108489 watch_cache.go:405] Replace watchCache (rev: 58795) 
I0920 06:49:45.616977  108489 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0920 06:49:45.617079  108489 reflector.go:153] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0920 06:49:45.617158  108489 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.617324  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:49:45.617407  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:49:45.618031  108489 watch_cache.go:405] Replace watchCache (rev: 58795) 
I0920 06:49:45.618506  108489 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0920 06:49:45.618659  108489 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.618661  108489 reflector.go:153] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0920 06:49:45.619070  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:49:45.619164  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:49:45.619572  108489 watch_cache.go:405] Replace watchCache (rev: 58795) 
I0920 06:49:45.619837  108489 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0920 06:49:45.619896  108489 reflector.go:153] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0920 06:49:45.620021  108489 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.620143  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:49:45.620162  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:49:45.620862  108489 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0920 06:49:45.620901  108489 master.go:461] Enabling API group "rbac.authorization.k8s.io".
I0920 06:49:45.620911  108489 reflector.go:153] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0920 06:49:45.622492  108489 watch_cache.go:405] Replace watchCache (rev: 58795) 
I0920 06:49:45.623078  108489 watch_cache.go:405] Replace watchCache (rev: 58795) 
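
Note: each reflector.go "Listing and watching *T" line followed by a watch_cache.go "Replace watchCache (rev: N)" line is a cacher priming its watch cache: list the current state from etcd, swap the snapshot in at that resource version, then watch for deltas from the same revision. A minimal sketch of that list-then-watch shape; the Lister, Event, and WatchCache types here are hypothetical stand-ins, not client-go's real Reflector machinery:

    package main

    import "fmt"

    // Event mimics a watch event; real code has typed objects and bookmarks.
    type Event struct{ Type, Object string }

    // fakeLister is a hypothetical stand-in for a storage client.
    type fakeLister struct{}

    func (fakeLister) List() ([]string, int) { return []string{"role/a", "role/b"}, 58795 }

    func (fakeLister) Watch(rev int) <-chan Event {
        ch := make(chan Event, 1)
        ch <- Event{"ADDED", "role/c"}
        close(ch)
        return ch
    }

    type WatchCache struct {
        items map[string]bool
        rev   int
    }

    // Replace swaps the cache contents for a fresh listed snapshot, which is
    // what the "Replace watchCache (rev: ...)" log lines record.
    func (c *WatchCache) Replace(items []string, rev int) {
        c.items = map[string]bool{}
        for _, it := range items {
            c.items[it] = true
        }
        c.rev = rev
        fmt.Printf("Replace watchCache (rev: %d)\n", rev)
    }

    func main() {
        l := fakeLister{}
        c := &WatchCache{}
        items, rev := l.List()         // initial LIST: snapshot + resourceVersion
        c.Replace(items, rev)          // prime the cache, as the log lines show
        for ev := range l.Watch(rev) { // then WATCH deltas from that revision
            if ev.Type == "DELETED" {
                delete(c.items, ev.Object)
            } else {
                c.items[ev.Object] = true
            }
        }
        fmt.Println(len(c.items), "objects cached at rev", c.rev)
    }
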
I0920 06:49:45.623650  108489 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.623771  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:49:45.623805  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:49:45.624451  108489 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0920 06:49:45.624515  108489 reflector.go:153] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0920 06:49:45.624761  108489 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.625001  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:49:45.625096  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:49:45.626046  108489 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0920 06:49:45.626068  108489 master.go:461] Enabling API group "scheduling.k8s.io".
I0920 06:49:45.626282  108489 watch_cache.go:405] Replace watchCache (rev: 58795) 
I0920 06:49:45.626334  108489 reflector.go:153] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0920 06:49:45.626367  108489 master.go:450] Skipping disabled API group "settings.k8s.io".
I0920 06:49:45.627772  108489 watch_cache.go:405] Replace watchCache (rev: 58795)
I0920 06:49:45.628986  108489 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.629206  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:49:45.629232  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:49:45.631012  108489 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0920 06:49:45.631136  108489 reflector.go:153] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0920 06:49:45.631241  108489 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.631424  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:49:45.631452  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:49:45.631736  108489 watch_cache.go:405] Replace watchCache (rev: 58795) 
I0920 06:49:45.632778  108489 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0920 06:49:45.632900  108489 reflector.go:153] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0920 06:49:45.633617  108489 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.633885  108489 watch_cache.go:405] Replace watchCache (rev: 58795) 
I0920 06:49:45.634004  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:49:45.634342  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:49:45.635224  108489 store.go:1342] Monitoring csinodes.storage.k8s.io count at <storage-prefix>//csinodes
I0920 06:49:45.635257  108489 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.635296  108489 reflector.go:153] Listing and watching *storage.CSINode from storage/cacher.go:/csinodes
I0920 06:49:45.635409  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:49:45.635447  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:49:45.636013  108489 store.go:1342] Monitoring csidrivers.storage.k8s.io count at <storage-prefix>//csidrivers
I0920 06:49:45.636108  108489 reflector.go:153] Listing and watching *storage.CSIDriver from storage/cacher.go:/csidrivers
I0920 06:49:45.636293  108489 watch_cache.go:405] Replace watchCache (rev: 58795) 
I0920 06:49:45.636420  108489 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.636549  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:49:45.636569  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:49:45.636686  108489 watch_cache.go:405] Replace watchCache (rev: 58795) 
I0920 06:49:45.637392  108489 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0920 06:49:45.637435  108489 reflector.go:153] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0920 06:49:45.637638  108489 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.637800  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:49:45.637823  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:49:45.638783  108489 watch_cache.go:405] Replace watchCache (rev: 58795) 
I0920 06:49:45.639219  108489 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0920 06:49:45.639268  108489 master.go:461] Enabling API group "storage.k8s.io".
I0920 06:49:45.639372  108489 reflector.go:153] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
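
Note: the recurring client.go 'parsed scheme: "endpoint"' / endpoint.go 'ccResolverWrapper: sending new addresses to cc' pairs come from the etcd v3 client dialing through gRPC: a resolver registered for the client's "endpoint" scheme hands the etcd server list to the ClientConn. A toy version of that resolver shape, using hypothetical Address/ClientConn types rather than grpc-go's real resolver package:

    package main

    import "fmt"

    // Hypothetical stand-ins for grpc-go's resolver plumbing. The real etcd
    // client registers a resolver for its "endpoint" scheme and pushes the
    // address list to the ClientConn, which the ccResolverWrapper lines log.
    type Address struct{ Addr string }

    type ClientConn interface {
        NewAddresses(addrs []Address)
    }

    type loggingConn struct{}

    func (loggingConn) NewAddresses(addrs []Address) {
        fmt.Printf("ccResolverWrapper: sending new addresses to cc: %v\n", addrs)
    }

    type endpointResolver struct {
        servers []string
        cc      ClientConn
    }

    func (r *endpointResolver) resolve() {
        addrs := make([]Address, 0, len(r.servers))
        for _, s := range r.servers {
            addrs = append(addrs, Address{Addr: s})
        }
        r.cc.NewAddresses(addrs) // push the resolved set to the connection
    }

    func main() {
        r := &endpointResolver{
            servers: []string{"http://127.0.0.1:2379"},
            cc:      loggingConn{},
        }
        r.resolve()
    }
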
I0920 06:49:45.639624  108489 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.639799  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:49:45.639820  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:49:45.640732  108489 watch_cache.go:405] Replace watchCache (rev: 58795) 
I0920 06:49:45.641469  108489 store.go:1342] Monitoring deployments.apps count at <storage-prefix>//deployments
I0920 06:49:45.641781  108489 reflector.go:153] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0920 06:49:45.643115  108489 watch_cache.go:405] Replace watchCache (rev: 58795) 
I0920 06:49:45.643377  108489 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.645399  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:49:45.645663  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:49:45.646657  108489 store.go:1342] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0920 06:49:45.646811  108489 reflector.go:153] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0920 06:49:45.646992  108489 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.647119  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:49:45.647143  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:49:45.647727  108489 store.go:1342] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0920 06:49:45.647789  108489 reflector.go:153] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0920 06:49:45.647992  108489 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.648148  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:49:45.648173  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:49:45.648526  108489 watch_cache.go:405] Replace watchCache (rev: 58795) 
I0920 06:49:45.648626  108489 watch_cache.go:405] Replace watchCache (rev: 58795) 
I0920 06:49:45.649942  108489 store.go:1342] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0920 06:49:45.650036  108489 reflector.go:153] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0920 06:49:45.650288  108489 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.650445  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:49:45.650566  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:49:45.651158  108489 watch_cache.go:405] Replace watchCache (rev: 58795) 
I0920 06:49:45.651435  108489 store.go:1342] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0920 06:49:45.651483  108489 master.go:461] Enabling API group "apps".
I0920 06:49:45.651521  108489 reflector.go:153] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
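
Note: every "storing X in G/v1, reading as G/__internal" line describes the apiserver's versioning round trip: objects are encoded to the group's chosen storage version before they are written to etcd, and decoded back into the unversioned internal (hub) type when read. A minimal sketch of that encode/decode pair; both struct types here are hypothetical, standing in for the scheme/codec stack in k8s.io/apimachinery:

    package main

    import "fmt"

    // Hypothetical hub-and-spoke conversion: one internal (hub) type plus a
    // versioned wire type used for storage.
    type internalDeployment struct {
        Name     string
        Replicas int
    }

    type v1Deployment struct {
        APIVersion string `json:"apiVersion"`
        Name       string `json:"name"`
        Replicas   int    `json:"replicas"`
    }

    // encode: internal -> storage version ("storing deployments.apps in apps/v1").
    func encode(in internalDeployment) v1Deployment {
        return v1Deployment{APIVersion: "apps/v1", Name: in.Name, Replicas: in.Replicas}
    }

    // decode: storage version -> internal ("reading as apps/__internal").
    func decode(v v1Deployment) internalDeployment {
        return internalDeployment{Name: v.Name, Replicas: v.Replicas}
    }

    func main() {
        obj := internalDeployment{Name: "web", Replicas: 3}
        stored := encode(obj)          // written to etcd as apps/v1
        roundTripped := decode(stored) // read back as the internal type
        fmt.Println(stored.APIVersion, roundTripped == obj) // apps/v1 true
    }
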
I0920 06:49:45.651527  108489 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.651722  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:49:45.651745  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:49:45.652270  108489 watch_cache.go:405] Replace watchCache (rev: 58795) 
I0920 06:49:45.652666  108489 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0920 06:49:45.652824  108489 reflector.go:153] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0920 06:49:45.653614  108489 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.653937  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:49:45.654082  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:49:45.654425  108489 watch_cache.go:405] Replace watchCache (rev: 58795) 
I0920 06:49:45.655401  108489 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0920 06:49:45.655451  108489 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.655623  108489 reflector.go:153] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0920 06:49:45.655633  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:49:45.655832  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:49:45.656625  108489 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0920 06:49:45.656657  108489 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.656806  108489 reflector.go:153] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0920 06:49:45.656808  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:49:45.656842  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:49:45.657998  108489 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0920 06:49:45.658032  108489 master.go:461] Enabling API group "admissionregistration.k8s.io".
I0920 06:49:45.658124  108489 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.658214  108489 watch_cache.go:405] Replace watchCache (rev: 58795) 
I0920 06:49:45.658280  108489 reflector.go:153] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0920 06:49:45.658970  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:49:45.658998  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:49:45.659901  108489 watch_cache.go:405] Replace watchCache (rev: 58795) 
I0920 06:49:45.659977  108489 watch_cache.go:405] Replace watchCache (rev: 58795) 
I0920 06:49:45.660344  108489 store.go:1342] Monitoring events count at <storage-prefix>//events
I0920 06:49:45.660366  108489 master.go:461] Enabling API group "events.k8s.io".
I0920 06:49:45.660524  108489 reflector.go:153] Listing and watching *core.Event from storage/cacher.go:/events
I0920 06:49:45.660561  108489 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.661057  108489 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.661415  108489 watch_cache.go:405] Replace watchCache (rev: 58795) 
I0920 06:49:45.661520  108489 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.661674  108489 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.662018  108489 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.662509  108489 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.662862  108489 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.663106  108489 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.663315  108489 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.663454  108489 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
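
Note: the tokenreviews and *accessreviews lines above get storage codecs configured but, unlike the other resources, no "Monitoring ... count" or reflector lines follow. These are virtual, request-scoped resources: each review is evaluated in memory and the result returned, nothing is persisted to etcd, so there is no cache to prime or watch. A tiny sketch of that create-only shape, with a hypothetical tokenReview type and placeholder check:

    package main

    import "fmt"

    // Hypothetical create-only handler: review resources evaluate a request
    // and return a result without writing anything to storage.
    type tokenReview struct {
        Token         string
        Authenticated bool
    }

    func createTokenReview(tr tokenReview) tokenReview {
        // Evaluated in memory; nothing is persisted, so no watch cache exists.
        tr.Authenticated = tr.Token == "valid-token" // placeholder check
        return tr
    }

    func main() {
        fmt.Printf("%+v\n", createTokenReview(tokenReview{Token: "valid-token"}))
    }
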
I0920 06:49:45.664688  108489 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.665259  108489 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.667034  108489 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.667472  108489 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.668495  108489 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.668907  108489 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.670022  108489 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.670288  108489 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.671358  108489 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.671817  108489 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0920 06:49:45.671900  108489 genericapiserver.go:404] Skipping API batch/v2alpha1 because it has no resources.
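
Note: the genericapiserver warnings like the one above show version pruning: a group/version is only installed if at least one of its resources is enabled, so versions with everything gated off (here batch/v2alpha1) are skipped. A small sketch of that filter over a hypothetical map of enabled resources:

    package main

    import "fmt"

    func main() {
        // Hypothetical map of group/version -> enabled resource names.
        apiGroups := map[string][]string{
            "batch/v1":       {"jobs"},
            "batch/v1beta1":  {"cronjobs"},
            "batch/v2alpha1": {}, // everything feature-gated off
        }
        for gv, resources := range apiGroups {
            if len(resources) == 0 {
                fmt.Printf("Skipping API %s because it has no resources.\n", gv)
                continue
            }
            fmt.Printf("Installing %s with %d resource(s)\n", gv, len(resources))
        }
    }
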
I0920 06:49:45.673292  108489 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.673524  108489 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.674169  108489 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.676267  108489 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.679326  108489 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.681134  108489 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.681630  108489 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.686487  108489 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.688035  108489 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.688616  108489 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.689548  108489 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0920 06:49:45.689656  108489 genericapiserver.go:404] Skipping API node.k8s.io/v1alpha1 because it has no resources.
I0920 06:49:45.690873  108489 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.691330  108489 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.692636  108489 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.693672  108489 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.694574  108489 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.695384  108489 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.696856  108489 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.697483  108489 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.698088  108489 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.699045  108489 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.700347  108489 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0920 06:49:45.700473  108489 genericapiserver.go:404] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
I0920 06:49:45.701351  108489 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.702028  108489 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0920 06:49:45.702108  108489 genericapiserver.go:404] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
I0920 06:49:45.702800  108489 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.703884  108489 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.704219  108489 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.704989  108489 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.705588  108489 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.707050  108489 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.707753  108489 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0920 06:49:45.707906  108489 genericapiserver.go:404] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
I0920 06:49:45.709324  108489 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.710466  108489 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.710743  108489 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.711906  108489 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.712214  108489 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.712784  108489 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.714017  108489 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.714796  108489 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.715223  108489 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.716835  108489 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.717130  108489 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.717570  108489 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0920 06:49:45.717681  108489 genericapiserver.go:404] Skipping API apps/v1beta2 because it has no resources.
W0920 06:49:45.717758  108489 genericapiserver.go:404] Skipping API apps/v1beta1 because it has no resources.
I0920 06:49:45.719163  108489 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.721214  108489 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.722993  108489 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.724101  108489 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:45.725662  108489 storage_factory.go:285] storing events.events.k8s.io in events.k8s.io/v1beta1, reading as events.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"32191624-1a71-4204-8e71-cb59337eea94", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
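[editor's note] The storage_factory lines above dump one storagebackend.Config per resource group, all pointing at the test etcd on 127.0.0.1:2379 under a per-test key prefix. Below is a minimal Go sketch of assembling an equivalent config; the field names are taken from the struct dumps in this log, but the apiserver builds these through its own storage factory, and the package's exact shape may differ at this revision.

    package main

    import (
    	"fmt"
    	"time"

    	"k8s.io/apiserver/pkg/storage/storagebackend"
    )

    // Illustrative only: a storagebackend.Config shaped like the ones
    // logged by storage_factory.go, not the constructor the apiserver
    // uses internally.
    func main() {
    	cfg := storagebackend.Config{
    		Prefix: "32191624-1a71-4204-8e71-cb59337eea94", // per-test etcd key prefix
    		Transport: storagebackend.TransportConfig{
    			ServerList: []string{"http://127.0.0.1:2379"},
    		},
    		Paging:                true,
    		CompactionInterval:    5 * time.Minute, // 300000000000ns in the log
    		CountMetricPollPeriod: time.Minute,     // 60000000000ns in the log
    	}
    	fmt.Printf("%+v\n", cfg)
    }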
I0920 06:49:45.731943  108489 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0920 06:49:45.731973  108489 healthz.go:177] healthz check poststarthook/bootstrap-controller failed: not finished
I0920 06:49:45.731982  108489 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 06:49:45.732001  108489 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0920 06:49:45.732007  108489 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0920 06:49:45.732013  108489 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[-]poststarthook/bootstrap-controller failed: reason withheld
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0920 06:49:45.732059  108489 httplog.go:90] GET /healthz: (232.812µs) 0 [Go-http-client/1.1 127.0.0.1:35454]
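[editor's note] The repeated [+]/[-] blocks in this log are the apiserver's verbose healthz report, emitted while the server polls itself during startup; the same report is served over HTTP. A sketch of fetching it in Go follows, assuming an insecure local port (the integration test actually binds an ephemeral address, so the URL here is hypothetical). Individual checks can also be probed at /healthz/<name>, e.g. /healthz/etcd.

    package main

    import (
    	"fmt"
    	"io"
    	"net/http"
    )

    // Sketch: read the verbose healthz report from a local test apiserver.
    // Substitute the server's real address for the placeholder URL.
    func main() {
    	resp, err := http.Get("http://127.0.0.1:8080/healthz?verbose")
    	if err != nil {
    		panic(err)
    	}
    	defer resp.Body.Close()

    	body, _ := io.ReadAll(resp.Body)
    	// While checks are still failing, this returns a non-200 status and
    	// a body like the blocks in this log, ending in "healthz check failed".
    	fmt.Printf("status=%d\n%s", resp.StatusCode, body)
    }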
I0920 06:49:45.733241  108489 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (2.153239ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35452]
I0920 06:49:45.736481  108489 httplog.go:90] GET /api/v1/services: (1.601972ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35452]
I0920 06:49:45.741238  108489 httplog.go:90] GET /api/v1/services: (1.68321ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35452]
I0920 06:49:45.744541  108489 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0920 06:49:45.744643  108489 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 06:49:45.744737  108489 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0920 06:49:45.744939  108489 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0920 06:49:45.745090  108489 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0920 06:49:45.745143  108489 httplog.go:90] GET /healthz: (723.993µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35452]
I0920 06:49:45.745480  108489 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.088326ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35454]
I0920 06:49:45.747599  108489 httplog.go:90] GET /api/v1/services: (829.812µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35452]
I0920 06:49:45.747605  108489 httplog.go:90] POST /api/v1/namespaces: (1.586843ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35454]
I0920 06:49:45.748924  108489 httplog.go:90] GET /api/v1/namespaces/kube-public: (820.398µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35454]
I0920 06:49:45.749188  108489 httplog.go:90] GET /api/v1/services: (1.235735ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35452]
I0920 06:49:45.751516  108489 httplog.go:90] POST /api/v1/namespaces: (1.436287ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35454]
I0920 06:49:45.752643  108489 httplog.go:90] GET /api/v1/namespaces/kube-node-lease: (745.093µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35454]
I0920 06:49:45.754604  108489 httplog.go:90] POST /api/v1/namespaces: (1.453132ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35454]
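[editor's note] The three GET 404 / POST 201 pairs above are the bootstrap controller creating kube-system, kube-public, and kube-node-lease. A client-go sketch of the same idempotent create follows (modern context-taking signatures; the client-go calls at this 2019 revision did not yet take a context argument).

    package main

    import (
    	"context"

    	corev1 "k8s.io/api/core/v1"
    	apierrors "k8s.io/apimachinery/pkg/api/errors"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // createNamespace creates a namespace and treats "already exists" as
    // success, mirroring the idempotent bootstrap behaviour in the log.
    func createNamespace(ctx context.Context, cs kubernetes.Interface, name string) error {
    	ns := &corev1.Namespace{ObjectMeta: metav1.ObjectMeta{Name: name}}
    	_, err := cs.CoreV1().Namespaces().Create(ctx, ns, metav1.CreateOptions{})
    	if apierrors.IsAlreadyExists(err) {
    		return nil
    	}
    	return err
    }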
I0920 06:49:45.833239  108489 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0920 06:49:45.833416  108489 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 06:49:45.833455  108489 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0920 06:49:45.833486  108489 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0920 06:49:45.833525  108489 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0920 06:49:45.833680  108489 httplog.go:90] GET /healthz: (609.521µs) 0 [Go-http-client/1.1 127.0.0.1:35454]
I0920 06:49:45.846071  108489 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0920 06:49:45.846107  108489 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 06:49:45.846117  108489 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0920 06:49:45.846124  108489 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0920 06:49:45.846142  108489 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0920 06:49:45.846179  108489 httplog.go:90] GET /healthz: (301.998µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35454]
I0920 06:49:45.933268  108489 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0920 06:49:45.933313  108489 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 06:49:45.933324  108489 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0920 06:49:45.933334  108489 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0920 06:49:45.933342  108489 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0920 06:49:45.933390  108489 httplog.go:90] GET /healthz: (314.434µs) 0 [Go-http-client/1.1 127.0.0.1:35454]
I0920 06:49:45.946141  108489 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0920 06:49:45.946180  108489 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 06:49:45.946194  108489 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0920 06:49:45.946204  108489 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0920 06:49:45.946212  108489 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0920 06:49:45.946248  108489 httplog.go:90] GET /healthz: (368.638µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35454]
I0920 06:49:46.033100  108489 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0920 06:49:46.033143  108489 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 06:49:46.033157  108489 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0920 06:49:46.033167  108489 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0920 06:49:46.033175  108489 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0920 06:49:46.033228  108489 httplog.go:90] GET /healthz: (286.351µs) 0 [Go-http-client/1.1 127.0.0.1:35454]
I0920 06:49:46.046042  108489 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0920 06:49:46.046076  108489 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 06:49:46.046086  108489 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0920 06:49:46.046092  108489 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0920 06:49:46.046098  108489 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0920 06:49:46.046123  108489 httplog.go:90] GET /healthz: (239.585µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35454]
I0920 06:49:46.093492  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:46.093782  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:46.093789  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:46.093793  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:46.093961  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:46.094023  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
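[editor's note] The bursts of "forcing resync" come from reflectors inside shared informers whose resync period has elapsed; on each tick the informer replays its store to registered handlers. A minimal sketch of wiring such a factory follows; the 12-second period is an arbitrary illustrative value, not what the test uses.

    package main

    import (
    	"time"

    	"k8s.io/client-go/informers"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/rest"
    )

    // newFactory builds a shared informer factory whose informers resync
    // every 12 seconds. Each resync produces a "forcing resync" log line
    // from the underlying reflector.
    func newFactory(cfg *rest.Config) (informers.SharedInformerFactory, error) {
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		return nil, err
    	}
    	return informers.NewSharedInformerFactory(cs, 12*time.Second), nil
    }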
I0920 06:49:46.133222  108489 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0920 06:49:46.133266  108489 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 06:49:46.133277  108489 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0920 06:49:46.133283  108489 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0920 06:49:46.133302  108489 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0920 06:49:46.133330  108489 httplog.go:90] GET /healthz: (314.175µs) 0 [Go-http-client/1.1 127.0.0.1:35454]
I0920 06:49:46.146136  108489 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0920 06:49:46.146172  108489 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 06:49:46.146202  108489 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0920 06:49:46.146212  108489 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0920 06:49:46.146220  108489 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0920 06:49:46.146264  108489 httplog.go:90] GET /healthz: (354.131µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35454]
I0920 06:49:46.206544  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:46.206645  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:46.206656  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:46.206681  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:46.206851  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:46.207315  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:46.233246  108489 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0920 06:49:46.233313  108489 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 06:49:46.233328  108489 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0920 06:49:46.233339  108489 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0920 06:49:46.233401  108489 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0920 06:49:46.233433  108489 httplog.go:90] GET /healthz: (336.148µs) 0 [Go-http-client/1.1 127.0.0.1:35454]
I0920 06:49:46.246838  108489 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0920 06:49:46.246888  108489 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 06:49:46.246899  108489 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0920 06:49:46.246906  108489 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0920 06:49:46.246920  108489 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0920 06:49:46.246948  108489 httplog.go:90] GET /healthz: (290.693µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35454]
I0920 06:49:46.299361  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:46.333165  108489 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0920 06:49:46.333218  108489 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 06:49:46.333233  108489 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0920 06:49:46.333241  108489 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0920 06:49:46.333250  108489 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0920 06:49:46.333306  108489 httplog.go:90] GET /healthz: (316.785µs) 0 [Go-http-client/1.1 127.0.0.1:35454]
I0920 06:49:46.346096  108489 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0920 06:49:46.346131  108489 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 06:49:46.346141  108489 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0920 06:49:46.346148  108489 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0920 06:49:46.346155  108489 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0920 06:49:46.346191  108489 httplog.go:90] GET /healthz: (256.992µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35454]
I0920 06:49:46.415746  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:46.433258  108489 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0920 06:49:46.433356  108489 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 06:49:46.433378  108489 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0920 06:49:46.433385  108489 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0920 06:49:46.433391  108489 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0920 06:49:46.433429  108489 httplog.go:90] GET /healthz: (313.068µs) 0 [Go-http-client/1.1 127.0.0.1:35454]
I0920 06:49:46.446167  108489 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0920 06:49:46.446204  108489 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 06:49:46.446214  108489 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0920 06:49:46.446220  108489 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0920 06:49:46.446226  108489 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0920 06:49:46.446264  108489 httplog.go:90] GET /healthz: (271.041µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35454]
I0920 06:49:46.490161  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:46.491272  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:46.491319  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:46.491389  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:46.491740  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:46.497358  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:46.497403  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:46.532117  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:49:46.532244  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
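[editor's note] The `parsed scheme: "endpoint"` / ccResolverWrapper pair appears to come from the etcd v3 client dialing the backend: the client registers a gRPC "endpoint" resolver and pushes the server list into it. A hedged sketch of opening the same connection directly with the etcd client (import path as of the etcd 3.4 era):

    package main

    import (
    	"time"

    	clientv3 "go.etcd.io/etcd/clientv3"
    )

    // newEtcdClient dials the same test etcd the apiserver is using.
    // The etcd client's internal gRPC resolver is the likely source of
    // the "endpoint" scheme lines in this log.
    func newEtcdClient() (*clientv3.Client, error) {
    	return clientv3.New(clientv3.Config{
    		Endpoints:   []string{"http://127.0.0.1:2379"},
    		DialTimeout: 5 * time.Second,
    	})
    }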
I0920 06:49:46.534054  108489 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 06:49:46.534231  108489 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0920 06:49:46.534367  108489 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0920 06:49:46.534425  108489 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0920 06:49:46.534574  108489 httplog.go:90] GET /healthz: (1.763804ms) 0 [Go-http-client/1.1 127.0.0.1:35454]
I0920 06:49:46.547423  108489 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 06:49:46.547465  108489 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0920 06:49:46.547477  108489 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0920 06:49:46.547488  108489 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0920 06:49:46.547549  108489 httplog.go:90] GET /healthz: (1.628632ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35454]
I0920 06:49:46.634741  108489 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 06:49:46.634793  108489 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0920 06:49:46.634802  108489 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0920 06:49:46.634808  108489 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0920 06:49:46.634870  108489 httplog.go:90] GET /healthz: (1.63436ms) 0 [Go-http-client/1.1 127.0.0.1:35454]
I0920 06:49:46.647171  108489 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 06:49:46.647209  108489 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0920 06:49:46.647220  108489 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0920 06:49:46.647228  108489 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0920 06:49:46.647269  108489 httplog.go:90] GET /healthz: (1.34921ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35454]
I0920 06:49:46.732134  108489 httplog.go:90] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-node-critical: (1.237156ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35454]
I0920 06:49:46.732336  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.389359ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35456]
I0920 06:49:46.733840  108489 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.878874ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:46.733954  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.166906ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35456]
I0920 06:49:46.734579  108489 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 06:49:46.734605  108489 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0920 06:49:46.734615  108489 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0920 06:49:46.734623  108489 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0920 06:49:46.734777  108489 httplog.go:90] GET /healthz: (985.092µs) 0 [Go-http-client/1.1 127.0.0.1:35480]
I0920 06:49:46.734788  108489 httplog.go:90] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (1.948248ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35454]
I0920 06:49:46.734990  108489 storage_scheduling.go:139] created PriorityClass system-node-critical with value 2000001000
I0920 06:49:46.735675  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (984.562µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35456]
I0920 06:49:46.735924  108489 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: (926.348µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:46.736288  108489 httplog.go:90] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-cluster-critical: (797.802µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35454]
I0920 06:49:46.736963  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (939.955µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35456]
I0920 06:49:46.738425  108489 httplog.go:90] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (1.772685ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35454]
I0920 06:49:46.738642  108489 storage_scheduling.go:139] created PriorityClass system-cluster-critical with value 2000000000
I0920 06:49:46.738670  108489 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
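[editor's note] The storage_scheduling.go post-start hook has just created the two built-in priority classes with the values shown (2000001000 and 2000000000); note the server still writes them through scheduling.k8s.io/v1beta1 at this revision. A client-go sketch of a user-created PriorityClass for comparison: the "system-" name prefix and values above one billion are reserved for the built-ins, so the sketch uses its own name and the maximum user-definable value.

    package main

    import (
    	"context"

    	schedulingv1 "k8s.io/api/scheduling/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // createHighPriority creates the highest-priority class a user may
    // define. Values above 1000000000 (like system-node-critical's
    // 2000001000 in the log) are reserved for the bootstrap hook.
    func createHighPriority(ctx context.Context, cs kubernetes.Interface) error {
    	pc := &schedulingv1.PriorityClass{
    		ObjectMeta: metav1.ObjectMeta{Name: "example-critical"},
    		Value:      1000000000,
    	}
    	_, err := cs.SchedulingV1().PriorityClasses().Create(ctx, pc, metav1.CreateOptions{})
    	return err
    }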
I0920 06:49:46.739583  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (874.165µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35456]
I0920 06:49:46.739863  108489 httplog.go:90] POST /api/v1/namespaces/kube-system/configmaps: (3.478347ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:46.740872  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (859.431µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35454]
I0920 06:49:46.742180  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (867.028µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:46.743280  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (778.655µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:46.744364  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (766.591µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:46.745828  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-admin: (1.060629ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:46.746428  108489 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 06:49:46.746460  108489 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 06:49:46.746497  108489 httplog.go:90] GET /healthz: (716.107µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35480]
I0920 06:49:46.747967  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.65378ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:46.748190  108489 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/cluster-admin
I0920 06:49:46.749470  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:discovery: (1.018426ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:46.751473  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.494131ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:46.751764  108489 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:discovery
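[editor's note] Every bootstrap role in the lines that follow shows the same reconcile shape: a GET that 404s, a POST that returns 201, and a storage_rbac.go confirmation. A get-then-create sketch with client-go; the rule contents are a trimmed stand-in, not the real system:discovery rules.

    package main

    import (
    	"context"

    	rbacv1 "k8s.io/api/rbac/v1"
    	apierrors "k8s.io/apimachinery/pkg/api/errors"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // ensureClusterRole creates the role only when the initial GET 404s,
    // matching the GET/POST pattern in the log.
    func ensureClusterRole(ctx context.Context, cs kubernetes.Interface, role *rbacv1.ClusterRole) error {
    	_, err := cs.RbacV1().ClusterRoles().Get(ctx, role.Name, metav1.GetOptions{})
    	if err == nil || !apierrors.IsNotFound(err) {
    		return err // already present, or a real error
    	}
    	_, err = cs.RbacV1().ClusterRoles().Create(ctx, role, metav1.CreateOptions{})
    	return err
    }

    // Trimmed example role: read-only access to discovery-style endpoints.
    func exampleDiscoveryRole() *rbacv1.ClusterRole {
    	return &rbacv1.ClusterRole{
    		ObjectMeta: metav1.ObjectMeta{Name: "example:discovery"},
    		Rules: []rbacv1.PolicyRule{{
    			Verbs:           []string{"get"},
    			NonResourceURLs: []string{"/healthz", "/version", "/api", "/apis"},
    		}},
    	}
    }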
I0920 06:49:46.753134  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:basic-user: (944.38µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:46.755380  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.802323ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:46.755594  108489 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:basic-user
I0920 06:49:46.756584  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:public-info-viewer: (775.084µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:46.758404  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.46863ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:46.758756  108489 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:public-info-viewer
I0920 06:49:46.759946  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (884.048µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:46.762071  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.762248ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:46.762308  108489 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/admin
I0920 06:49:46.764105  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (1.533855ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:46.766600  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.994699ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:46.767313  108489 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/edit
I0920 06:49:46.768615  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (1.020875ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:46.771132  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.859906ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:46.771510  108489 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/view
I0920 06:49:46.772843  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (915.111µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:46.775564  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.15382ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:46.776250  108489 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin
I0920 06:49:46.777895  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (1.295385ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:46.782838  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.222294ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:46.783397  108489 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit
I0920 06:49:46.784862  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (1.151346ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:46.787841  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.2245ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:46.788343  108489 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view
I0920 06:49:46.789569  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:heapster: (923.513µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:46.794145  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.682904ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:46.794929  108489 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:heapster
I0920 06:49:46.797620  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node: (2.417429ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:46.800980  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.670627ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:46.801375  108489 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node
I0920 06:49:46.802628  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-problem-detector: (1.009423ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:46.805435  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.263095ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:46.805612  108489 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector
I0920 06:49:46.807199  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kubelet-api-admin: (1.320549ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:46.809613  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.01325ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:46.809857  108489 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kubelet-api-admin
I0920 06:49:46.811370  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-bootstrapper: (1.333599ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:46.814012  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.169373ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:46.814341  108489 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-bootstrapper
I0920 06:49:46.815565  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:auth-delegator: (971.536µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:46.817970  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.798461ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:46.818565  108489 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:auth-delegator
I0920 06:49:46.820952  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-aggregator: (1.863587ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:46.824185  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.326038ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:46.824594  108489 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator
I0920 06:49:46.826578  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-controller-manager: (1.604551ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:46.830228  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.758176ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:46.830530  108489 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager
I0920 06:49:46.831778  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-dns: (1.003929ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:46.833867  108489 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 06:49:46.833913  108489 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 06:49:46.833990  108489 httplog.go:90] GET /healthz: (1.043868ms) 0 [Go-http-client/1.1 127.0.0.1:35480]
I0920 06:49:46.834749  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.526279ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:46.834971  108489 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-dns
I0920 06:49:46.836625  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:persistent-volume-provisioner: (1.45385ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:46.839193  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.058103ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:46.839423  108489 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner
I0920 06:49:46.840985  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-attacher: (1.231411ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:46.844438  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.01547ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:46.844916  108489 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:csi-external-attacher
I0920 06:49:46.846527  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:nodeclient: (1.146816ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:46.846666  108489 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 06:49:46.846736  108489 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 06:49:46.847771  108489 httplog.go:90] GET /healthz: (1.819504ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35480]
I0920 06:49:46.849020  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.876823ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:46.849448  108489 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:nodeclient
I0920 06:49:46.850844  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient: (968.267µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:46.852968  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.736325ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:46.853315  108489 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
I0920 06:49:46.854163  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:volume-scheduler: (709.807µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:46.857096  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.331779ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:46.857426  108489 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:volume-scheduler
I0920 06:49:46.858757  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-proxier: (1.124572ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:46.861437  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.883419ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:46.861719  108489 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-proxier
I0920 06:49:46.863687  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-scheduler: (1.624598ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:46.866603  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.348205ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:46.866982  108489 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler
I0920 06:49:46.868452  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-provisioner: (1.053293ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:46.870970  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.832065ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:46.871161  108489 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:csi-external-provisioner
I0920 06:49:46.873504  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:attachdetach-controller: (2.032608ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:46.878004  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.834925ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:46.878283  108489 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0920 06:49:46.880253  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:clusterrole-aggregation-controller: (1.315053ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:46.882954  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.26893ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:46.883428  108489 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0920 06:49:46.884996  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:cronjob-controller: (1.072294ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:46.887853  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.084392ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:46.888490  108489 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0920 06:49:46.889953  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:daemon-set-controller: (1.144683ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:46.892115  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.726294ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:46.892340  108489 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0920 06:49:46.893684  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:deployment-controller: (1.094461ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:46.896432  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.306498ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:46.896829  108489 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller
I0920 06:49:46.898298  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:disruption-controller: (1.238477ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:46.901285  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.4885ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:46.901681  108489 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller
I0920 06:49:46.903292  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:endpoint-controller: (1.383591ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:46.907604  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.674632ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:46.907842  108489 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0920 06:49:46.909891  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:expand-controller: (1.82175ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:46.912024  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.633861ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:46.912308  108489 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:expand-controller
I0920 06:49:46.913630  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:generic-garbage-collector: (1.097869ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:46.916051  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.897996ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:46.916318  108489 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0920 06:49:46.917348  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:horizontal-pod-autoscaler: (833.89µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:46.919643  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.785077ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:46.919882  108489 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0920 06:49:46.921598  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:job-controller: (1.422659ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:46.924182  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.796803ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:46.924522  108489 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller
I0920 06:49:46.926209  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:namespace-controller: (1.397894ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:46.928606  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.038015ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:46.929037  108489 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:namespace-controller
I0920 06:49:46.930303  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:node-controller: (883.363µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:46.933501  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.455394ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:46.933684  108489 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 06:49:46.933763  108489 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 06:49:46.934206  108489 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:node-controller
I0920 06:49:46.934436  108489 httplog.go:90] GET /healthz: (975.508µs) 0 [Go-http-client/1.1 127.0.0.1:35480]
I0920 06:49:46.935644  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:persistent-volume-binder: (1.168887ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:46.938415  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.251316ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:46.938694  108489 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0920 06:49:46.940130  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pod-garbage-collector: (958.761µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:46.942374  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.703955ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:46.942855  108489 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0920 06:49:46.944563  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replicaset-controller: (1.514183ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:46.946418  108489 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 06:49:46.946591  108489 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 06:49:46.947350  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.289387ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:46.947502  108489 httplog.go:90] GET /healthz: (1.712346ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35480]
I0920 06:49:46.948358  108489 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0920 06:49:46.949526  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replication-controller: (960.225µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:46.951889  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.859019ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:46.952215  108489 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller
I0920 06:49:46.953874  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:resourcequota-controller: (1.450702ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:46.957389  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.059234ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:46.957729  108489 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0920 06:49:46.959019  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:route-controller: (854.573µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:46.961639  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.024947ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:46.962074  108489 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:route-controller
I0920 06:49:46.965208  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-account-controller: (2.426551ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:46.968557  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.627964ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:46.968853  108489 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-account-controller
I0920 06:49:46.969989  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-controller: (806.809µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:46.972693  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.076499ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:46.972953  108489 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-controller
I0920 06:49:46.975870  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:statefulset-controller: (1.422934ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:46.978479  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.042416ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:46.978861  108489 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0920 06:49:46.980144  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:ttl-controller: (1.088547ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:46.983250  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.33992ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:46.983679  108489 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-controller
I0920 06:49:46.985036  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:certificate-controller: (1.08062ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:46.988421  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.649252ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:46.988603  108489 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller
I0920 06:49:46.989805  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pvc-protection-controller: (1.016001ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:46.993072  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.101128ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:46.993402  108489 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0920 06:49:47.012772  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pv-protection-controller: (1.507863ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:47.033886  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.742226ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:47.034023  108489 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 06:49:47.034046  108489 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 06:49:47.034078  108489 httplog.go:90] GET /healthz: (973.015µs) 0 [Go-http-client/1.1 127.0.0.1:35480]
I0920 06:49:47.034319  108489 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0920 06:49:47.047153  108489 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 06:49:47.047314  108489 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 06:49:47.047442  108489 httplog.go:90] GET /healthz: (1.449471ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35480]
I0920 06:49:47.052770  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin: (1.634155ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35480]
I0920 06:49:47.074443  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.280319ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35480]
I0920 06:49:47.074799  108489 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin
I0920 06:49:47.092576  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (1.407534ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35480]
I0920 06:49:47.093657  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:47.093949  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:47.093956  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:47.094003  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:47.094130  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:47.094158  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
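
The recurring bursts of "forcing resync" lines come from shared informers built with a non-zero defaultResync: every period each reflector replays its full cache through the registered handlers, so UpdateFunc fires even when nothing changed. A hedged sketch of how such a factory is wired with client-go (the 30s period is illustrative; the test's actual setting is not visible in this log):

    import (
        "time"

        "k8s.io/client-go/informers"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/cache"
    )

    // startPodInformer wires a pod informer whose reflector "forces resync"
    // every defaultResync period, re-delivering cached objects to UpdateFunc.
    func startPodInformer(cs kubernetes.Interface, stopCh <-chan struct{}) {
        factory := informers.NewSharedInformerFactory(cs, 30*time.Second)
        podInformer := factory.Core().V1().Pods().Informer()
        podInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
            UpdateFunc: func(oldObj, newObj interface{}) {
                // On a resync, oldObj and newObj are the same cached object.
            },
        })
        factory.Start(stopCh)
        factory.WaitForCacheSync(stopCh)
    }
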
I0920 06:49:47.099417  108489 scheduling_queue.go:830] About to try and schedule pod taint-based-evictionsab06a422-210e-4398-a535-6b106be64cc2/testpod-1
I0920 06:49:47.099448  108489 scheduler.go:530] Attempting to schedule pod: taint-based-evictionsab06a422-210e-4398-a535-6b106be64cc2/testpod-1
I0920 06:49:47.099688  108489 factory.go:541] Unable to schedule taint-based-evictionsab06a422-210e-4398-a535-6b106be64cc2/testpod-1: no fit: 0/3 nodes are available: 3 node(s) had taints that the pod didn't tolerate.; waiting
I0920 06:49:47.099799  108489 factory.go:615] Updating pod condition for taint-based-evictionsab06a422-210e-4398-a535-6b106be64cc2/testpod-1 to (PodScheduled==False, Reason=Unschedulable)
I0920 06:49:47.102034  108489 httplog.go:90] GET /api/v1/namespaces/taint-based-evictionsab06a422-210e-4398-a535-6b106be64cc2/pods/testpod-1: (1.870151ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41922]
I0920 06:49:47.102042  108489 httplog.go:90] GET /api/v1/namespaces/taint-based-evictionsab06a422-210e-4398-a535-6b106be64cc2/pods/testpod-1: (1.879859ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41920]
I0920 06:49:47.102495  108489 generic_scheduler.go:337] Preemption will not help schedule pod taint-based-evictionsab06a422-210e-4398-a535-6b106be64cc2/testpod-1 on any node.
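
At this point testpod-1 stays Pending: all three nodes carry taints the pod does not tolerate, and preemption cannot help because evicting other pods would not remove a node taint, so the scheduler just records PodScheduled=False with Reason=Unschedulable. The sub-test name ("... and 0 tolerationseconds") implies a toleration shaped roughly like the sketch below; the field values are inferred from the test name and the TaintBasedEvictions feature, not read from the test source:

    import v1 "k8s.io/api/core/v1"

    // withZeroToleration adds a zero-second toleration for the not-ready
    // taint: the pod tolerates the taint for 0s and is evicted as soon as
    // the taint lands on its node.
    func withZeroToleration(pod *v1.Pod) {
        zero := int64(0)
        pod.Spec.Tolerations = append(pod.Spec.Tolerations, v1.Toleration{
            Key:               "node.kubernetes.io/not-ready", // taint set by the node lifecycle controller
            Operator:          v1.TolerationOpExists,
            Effect:            v1.TaintEffectNoExecute,
            TolerationSeconds: &zero,
        })
    }
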
I0920 06:49:47.114994  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.677813ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35480]
I0920 06:49:47.115579  108489 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery
I0920 06:49:47.133548  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:basic-user: (2.428812ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35480]
I0920 06:49:47.134922  108489 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 06:49:47.134954  108489 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 06:49:47.135032  108489 httplog.go:90] GET /healthz: (1.91729ms) 0 [Go-http-client/1.1 127.0.0.1:35478]
I0920 06:49:47.147144  108489 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 06:49:47.147179  108489 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 06:49:47.147320  108489 httplog.go:90] GET /healthz: (1.242699ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:47.153755  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.649197ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:47.154072  108489 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user
I0920 06:49:47.172387  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:public-info-viewer: (1.238239ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:47.193511  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.370005ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:47.193924  108489 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:public-info-viewer
I0920 06:49:47.206748  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:47.206887  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:47.206922  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:47.206958  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:47.206974  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:47.207477  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:47.212410  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node-proxier: (1.347904ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:47.233465  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.324019ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:47.234043  108489 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier
I0920 06:49:47.234409  108489 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 06:49:47.234431  108489 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 06:49:47.234474  108489 httplog.go:90] GET /healthz: (1.467095ms) 0 [Go-http-client/1.1 127.0.0.1:35480]
I0920 06:49:47.246841  108489 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 06:49:47.246999  108489 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 06:49:47.247236  108489 httplog.go:90] GET /healthz: (1.357113ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35480]
I0920 06:49:47.253233  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-controller-manager: (1.446913ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35480]
I0920 06:49:47.273615  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.388973ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35480]
I0920 06:49:47.273952  108489 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager
I0920 06:49:47.293231  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-dns: (1.977358ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35480]
I0920 06:49:47.299569  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:47.313733  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.599464ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35480]
I0920 06:49:47.314167  108489 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns
I0920 06:49:47.332859  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-scheduler: (1.708352ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35480]
I0920 06:49:47.334090  108489 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 06:49:47.334220  108489 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 06:49:47.334396  108489 httplog.go:90] GET /healthz: (1.364767ms) 0 [Go-http-client/1.1 127.0.0.1:35478]
I0920 06:49:47.347115  108489 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 06:49:47.347246  108489 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 06:49:47.347375  108489 httplog.go:90] GET /healthz: (1.439183ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:47.353469  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.410164ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:47.354217  108489 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler
I0920 06:49:47.372920  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:volume-scheduler: (1.701675ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:47.393342  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.244482ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:47.394007  108489 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:volume-scheduler
I0920 06:49:47.412863  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node: (1.548601ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:47.415974  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:47.434380  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.223285ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:47.434668  108489 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 06:49:47.434742  108489 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 06:49:47.434782  108489 httplog.go:90] GET /healthz: (1.805416ms) 0 [Go-http-client/1.1 127.0.0.1:35480]
I0920 06:49:47.435090  108489 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node
I0920 06:49:47.447278  108489 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 06:49:47.447437  108489 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 06:49:47.447588  108489 httplog.go:90] GET /healthz: (1.620601ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35480]
I0920 06:49:47.453461  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:attachdetach-controller: (2.050661ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35480]
I0920 06:49:47.473631  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.446935ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35480]
I0920 06:49:47.474021  108489 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0920 06:49:47.490348  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:47.491468  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:47.491570  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:47.491692  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:47.491910  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:47.492843  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:clusterrole-aggregation-controller: (1.569018ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35480]
I0920 06:49:47.497529  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:47.497696  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:47.514003  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.802748ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35480]
I0920 06:49:47.514287  108489 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0920 06:49:47.534031  108489 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 06:49:47.534075  108489 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 06:49:47.534129  108489 httplog.go:90] GET /healthz: (1.070579ms) 0 [Go-http-client/1.1 127.0.0.1:35478]
I0920 06:49:47.534812  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:cronjob-controller: (3.674582ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35480]
I0920 06:49:47.547150  108489 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 06:49:47.547187  108489 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 06:49:47.547238  108489 httplog.go:90] GET /healthz: (1.30331ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35480]
I0920 06:49:47.553483  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.360851ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35480]
I0920 06:49:47.553837  108489 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0920 06:49:47.572814  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:daemon-set-controller: (1.690227ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35480]
I0920 06:49:47.594052  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.941689ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35480]
I0920 06:49:47.594503  108489 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0920 06:49:47.613414  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:deployment-controller: (2.257597ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35480]
I0920 06:49:47.633405  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.351608ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35480]
I0920 06:49:47.633864  108489 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 06:49:47.633901  108489 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 06:49:47.633926  108489 httplog.go:90] GET /healthz: (1.003077ms) 0 [Go-http-client/1.1 127.0.0.1:35478]
I0920 06:49:47.633978  108489 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I0920 06:49:47.647018  108489 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 06:49:47.647054  108489 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 06:49:47.647109  108489 httplog.go:90] GET /healthz: (1.200831ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:47.652454  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:disruption-controller: (1.432598ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:47.673646  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.546468ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:47.674265  108489 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I0920 06:49:47.693093  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:endpoint-controller: (1.856825ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:47.713763  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.567766ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:47.714071  108489 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0920 06:49:47.732535  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:expand-controller: (1.30184ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:47.734520  108489 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 06:49:47.734557  108489 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 06:49:47.734617  108489 httplog.go:90] GET /healthz: (1.616779ms) 0 [Go-http-client/1.1 127.0.0.1:35478]
I0920 06:49:47.747189  108489 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 06:49:47.747346  108489 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 06:49:47.747573  108489 httplog.go:90] GET /healthz: (1.536584ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:47.753598  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.548085ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:47.754191  108489 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
I0920 06:49:47.772842  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:generic-garbage-collector: (1.72646ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:47.794297  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.112316ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:47.794596  108489 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0920 06:49:47.812960  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:horizontal-pod-autoscaler: (1.820241ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:47.833600  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.351893ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:47.833903  108489 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 06:49:47.833933  108489 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 06:49:47.833958  108489 httplog.go:90] GET /healthz: (948.622µs) 0 [Go-http-client/1.1 127.0.0.1:35480]
I0920 06:49:47.834272  108489 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0920 06:49:47.847016  108489 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 06:49:47.847053  108489 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 06:49:47.847100  108489 httplog.go:90] GET /healthz: (1.094868ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:47.852781  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:job-controller: (1.645696ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:47.873772  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.526436ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:47.874014  108489 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I0920 06:49:47.892828  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:namespace-controller: (1.718484ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:47.913670  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.459894ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:47.914347  108489 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I0920 06:49:47.933289  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:node-controller: (2.118885ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:47.934206  108489 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 06:49:47.934473  108489 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 06:49:47.934790  108489 httplog.go:90] GET /healthz: (1.840234ms) 0 [Go-http-client/1.1 127.0.0.1:35480]
I0920 06:49:47.947131  108489 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 06:49:47.947169  108489 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 06:49:47.947221  108489 httplog.go:90] GET /healthz: (1.359961ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35480]
I0920 06:49:47.953722  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.533385ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35480]
I0920 06:49:47.954048  108489 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I0920 06:49:47.972816  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:persistent-volume-binder: (1.650537ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35480]
I0920 06:49:47.993288  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.179485ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35480]
I0920 06:49:47.993596  108489 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0920 06:49:48.012571  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pod-garbage-collector: (1.44823ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35480]
I0920 06:49:48.034044  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.798479ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35480]
I0920 06:49:48.034491  108489 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 06:49:48.034617  108489 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 06:49:48.034795  108489 httplog.go:90] GET /healthz: (1.877409ms) 0 [Go-http-client/1.1 127.0.0.1:35478]
I0920 06:49:48.035046  108489 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0920 06:49:48.047041  108489 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 06:49:48.047236  108489 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 06:49:48.047524  108489 httplog.go:90] GET /healthz: (1.571744ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:48.052604  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replicaset-controller: (1.390336ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:48.074488  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.086769ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:48.074844  108489 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0920 06:49:48.092372  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replication-controller: (1.278542ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:48.093789  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:48.094192  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:48.094290  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:48.094419  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:48.094421  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:48.094431  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:48.113162  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.020944ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:48.113475  108489 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I0920 06:49:48.132430  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:resourcequota-controller: (1.261713ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:48.133993  108489 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 06:49:48.134158  108489 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 06:49:48.134296  108489 httplog.go:90] GET /healthz: (1.375595ms) 0 [Go-http-client/1.1 127.0.0.1:35480]
I0920 06:49:48.147289  108489 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 06:49:48.147329  108489 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 06:49:48.147378  108489 httplog.go:90] GET /healthz: (1.315519ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35480]
I0920 06:49:48.153243  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.216851ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35480]
I0920 06:49:48.153582  108489 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0920 06:49:48.172595  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:route-controller: (1.40636ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35480]
I0920 06:49:48.193409  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.272118ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35480]
I0920 06:49:48.193780  108489 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I0920 06:49:48.206991  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:48.207047  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:48.207122  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:48.207137  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:48.207153  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:48.207802  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:48.212660  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-account-controller: (1.580469ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35480]
I0920 06:49:48.233814  108489 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 06:49:48.233854  108489 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 06:49:48.233893  108489 httplog.go:90] GET /healthz: (982.587µs) 0 [Go-http-client/1.1 127.0.0.1:35478]
I0920 06:49:48.234012  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.697248ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35480]
I0920 06:49:48.234449  108489 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I0920 06:49:48.247111  108489 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 06:49:48.247196  108489 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 06:49:48.247248  108489 httplog.go:90] GET /healthz: (1.200957ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35480]
I0920 06:49:48.253106  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-controller: (1.946255ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35480]
I0920 06:49:48.274220  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.076395ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35480]
I0920 06:49:48.274552  108489 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I0920 06:49:48.292688  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:statefulset-controller: (1.440161ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35480]
I0920 06:49:48.300047  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:48.313918  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.49741ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35480]
I0920 06:49:48.314223  108489 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0920 06:49:48.332914  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:ttl-controller: (1.696411ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35480]
I0920 06:49:48.333814  108489 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 06:49:48.333927  108489 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 06:49:48.334102  108489 httplog.go:90] GET /healthz: (1.140545ms) 0 [Go-http-client/1.1 127.0.0.1:35478]
I0920 06:49:48.347788  108489 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 06:49:48.347843  108489 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 06:49:48.347897  108489 httplog.go:90] GET /healthz: (1.777217ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:48.354004  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.861209ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:48.354378  108489 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I0920 06:49:48.372870  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:certificate-controller: (1.734168ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:48.393388  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.210554ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:48.393741  108489 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I0920 06:49:48.412882  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pvc-protection-controller: (1.670193ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:48.416213  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:48.433810  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.592767ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:48.434253  108489 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0920 06:49:48.434972  108489 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 06:49:48.435078  108489 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 06:49:48.435261  108489 httplog.go:90] GET /healthz: (2.280258ms) 0 [Go-http-client/1.1 127.0.0.1:35480]
I0920 06:49:48.447102  108489 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 06:49:48.447255  108489 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 06:49:48.447413  108489 httplog.go:90] GET /healthz: (1.525066ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35480]
I0920 06:49:48.452363  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pv-protection-controller: (1.302766ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35480]
I0920 06:49:48.473421  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.288468ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35480]
I0920 06:49:48.473761  108489 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
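Annotation: every clusterrolebinding above follows the same reconcile pattern: a GET that returns 404, then a POST that returns 201 and a "created clusterrolebinding..." line. A minimal get-or-create sketch under that assumption (the store interface is hypothetical, not client-go or the RBAC REST storage):

package sketch

// store is a hypothetical stand-in for the RBAC storage backend.
type store interface {
	Get(name string) (exists bool, err error)
	Create(name string) error
}

// ensureClusterRoleBinding reproduces the GET-404-then-POST-201 pattern
// from the log: look up the object, create it only if it is missing.
func ensureClusterRoleBinding(s store, name string) error {
	exists, err := s.Get(name)
	if err != nil {
		return err
	}
	if exists {
		return nil // already present, nothing further is logged
	}
	return s.Create(name) // logged as POST ... 201 and "created ..."
}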
I0920 06:49:48.490549  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:48.491636  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:48.491809  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:48.491899  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:48.492044  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:48.492892  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/extension-apiserver-authentication-reader: (1.561309ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35480]
I0920 06:49:48.494671  108489 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.230125ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35480]
I0920 06:49:48.497806  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:48.497925  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:48.514843  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (3.019655ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35480]
I0920 06:49:48.515296  108489 storage_rbac.go:278] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
I0920 06:49:48.532687  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:bootstrap-signer: (1.442906ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35480]
I0920 06:49:48.533969  108489 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 06:49:48.534013  108489 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 06:49:48.534063  108489 httplog.go:90] GET /healthz: (948.979µs) 0 [Go-http-client/1.1 127.0.0.1:35478]
I0920 06:49:48.534799  108489 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.418635ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35480]
I0920 06:49:48.546891  108489 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 06:49:48.546926  108489 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 06:49:48.546979  108489 httplog.go:90] GET /healthz: (1.098106ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35480]
I0920 06:49:48.554313  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.806019ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35480]
I0920 06:49:48.554566  108489 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0920 06:49:48.572540  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:cloud-provider: (1.342712ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35480]
I0920 06:49:48.574720  108489 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.403187ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35480]
I0920 06:49:48.593867  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.736523ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35480]
I0920 06:49:48.594227  108489 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0920 06:49:48.612358  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:token-cleaner: (1.315053ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35480]
I0920 06:49:48.614321  108489 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.432703ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35480]
I0920 06:49:48.633483  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.215825ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35480]
I0920 06:49:48.633893  108489 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0920 06:49:48.634326  108489 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 06:49:48.634363  108489 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 06:49:48.634391  108489 httplog.go:90] GET /healthz: (1.479386ms) 0 [Go-http-client/1.1 127.0.0.1:35478]
I0920 06:49:48.646864  108489 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 06:49:48.647069  108489 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 06:49:48.647228  108489 httplog.go:90] GET /healthz: (1.273385ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:48.652672  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-controller-manager: (1.59626ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:48.656021  108489 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.564559ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:48.673341  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.152739ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:48.673807  108489 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0920 06:49:48.692748  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-scheduler: (1.656917ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:48.694991  108489 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.719104ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:48.713413  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.340886ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:48.713680  108489 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0920 06:49:48.733127  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles/system:controller:bootstrap-signer: (1.768711ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:48.734162  108489 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 06:49:48.734265  108489 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 06:49:48.734457  108489 httplog.go:90] GET /healthz: (1.578569ms) 0 [Go-http-client/1.1 127.0.0.1:35480]
I0920 06:49:48.734952  108489 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.30153ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:48.747257  108489 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 06:49:48.747673  108489 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 06:49:48.747767  108489 httplog.go:90] GET /healthz: (1.575255ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:48.753346  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles: (2.291653ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:48.753625  108489 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0920 06:49:48.772667  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::extension-apiserver-authentication-reader: (1.52953ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:48.775632  108489 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.091642ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:48.794218  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.974237ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:48.794598  108489 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::extension-apiserver-authentication-reader in kube-system
I0920 06:49:48.813565  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-controller-manager: (2.343355ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:48.816344  108489 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.065831ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:48.833654  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.349026ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:48.834154  108489 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 06:49:48.834191  108489 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 06:49:48.834230  108489 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0920 06:49:48.834243  108489 httplog.go:90] GET /healthz: (1.327157ms) 0 [Go-http-client/1.1 127.0.0.1:35480]
I0920 06:49:48.847336  108489 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 06:49:48.847372  108489 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 06:49:48.847417  108489 httplog.go:90] GET /healthz: (1.443304ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35480]
I0920 06:49:48.852851  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-scheduler: (1.741676ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35480]
I0920 06:49:48.855135  108489 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.577426ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35480]
I0920 06:49:48.874149  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.652148ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35480]
I0920 06:49:48.874458  108489 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0920 06:49:48.892693  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:bootstrap-signer: (1.565542ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35480]
I0920 06:49:48.895098  108489 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.565276ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35480]
I0920 06:49:48.914523  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (3.305794ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35480]
I0920 06:49:48.914813  108489 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0920 06:49:48.933323  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:cloud-provider: (2.142904ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35480]
I0920 06:49:48.935732  108489 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.64502ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35480]
I0920 06:49:48.936072  108489 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 06:49:48.936097  108489 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 06:49:48.936126  108489 httplog.go:90] GET /healthz: (3.134183ms) 0 [Go-http-client/1.1 127.0.0.1:35478]
I0920 06:49:48.947296  108489 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 06:49:48.947338  108489 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 06:49:48.947396  108489 httplog.go:90] GET /healthz: (1.412226ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:48.954025  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.479393ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:48.954369  108489 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0920 06:49:48.972512  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:token-cleaner: (1.418638ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:48.974940  108489 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.552469ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:48.993733  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.426477ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:48.994196  108489 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0920 06:49:49.012207  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings/system:controller:bootstrap-signer: (1.155581ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:49.014475  108489 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.72482ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:49.034024  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings: (2.838573ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:49.034025  108489 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 06:49:49.034143  108489 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 06:49:49.034173  108489 httplog.go:90] GET /healthz: (1.22171ms) 0 [Go-http-client/1.1 127.0.0.1:35480]
I0920 06:49:49.034658  108489 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0920 06:49:49.047522  108489 httplog.go:90] GET /healthz: (1.710163ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:49.049940  108489 httplog.go:90] GET /api/v1/namespaces/default: (1.582607ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:49.053030  108489 httplog.go:90] POST /api/v1/namespaces: (2.515619ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:49.055365  108489 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.782532ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:49.059474  108489 httplog.go:90] POST /api/v1/namespaces/default/services: (3.585492ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:49.061354  108489 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.211179ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:49.066956  108489 httplog.go:90] POST /api/v1/namespaces/default/endpoints: (5.12525ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
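Annotation: once /healthz finally returns 200 (06:49:49.047 above), the bootstrap controller ensures the default namespace, the kubernetes Service, and its Endpoints exist, again via GET-404 followed by POST-201. A sketch of that ordered ensure loop (the ensure callback is hypothetical; the paths are the ones in the log):

package sketch

// bootstrapDefaults mirrors the ordered ensure calls above.
func bootstrapDefaults(ensure func(path string) error) error {
	for _, p := range []string{
		"/api/v1/namespaces/default",                      // 404 then 201
		"/api/v1/namespaces/default/services/kubernetes",  // 404 then 201
		"/api/v1/namespaces/default/endpoints/kubernetes", // 404 then 201
	} {
		if err := ensure(p); err != nil {
			return err
		}
	}
	return nil
}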
I0920 06:49:49.093980  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:49.094370  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:49.094580  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:49.094710  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:49.094739  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:49.094753  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:49.136045  108489 httplog.go:90] GET /healthz: (1.562134ms) 200 [Go-http-client/1.1 127.0.0.1:35478]
W0920 06:49:49.137291  108489 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0920 06:49:49.137347  108489 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0920 06:49:49.137384  108489 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0920 06:49:49.137394  108489 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0920 06:49:49.137427  108489 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0920 06:49:49.137450  108489 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0920 06:49:49.137459  108489 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0920 06:49:49.137475  108489 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0920 06:49:49.137487  108489 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0920 06:49:49.137497  108489 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0920 06:49:49.137512  108489 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0920 06:49:49.137556  108489 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0920 06:49:49.137578  108489 factory.go:294] Creating scheduler from algorithm provider 'DefaultProvider'
I0920 06:49:49.137589  108489 factory.go:382] Creating scheduler with fit predicates 'map[CheckNodeUnschedulable:{} CheckVolumeBinding:{} GeneralPredicates:{} MatchInterPodAffinity:{} MaxAzureDiskVolumeCount:{} MaxCSIVolumeCountPred:{} MaxEBSVolumeCount:{} MaxGCEPDVolumeCount:{} NoDiskConflict:{} NoVolumeZoneConflict:{} PodToleratesNodeTaints:{}]' and priority functions 'map[BalancedResourceAllocation:{} ImageLocalityPriority:{} InterPodAffinityPriority:{} LeastRequestedPriority:{} NodeAffinityPriority:{} NodePreferAvoidPodsPriority:{} SelectorSpreadPriority:{} TaintTolerationPriority:{}]'
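Annotation: the fit predicates listed above include PodToleratesNodeTaints, which is what later filters out all three tainted nodes for testpod-1. A simplified Go sketch of its matching rule, using local types rather than the scheduler's real ones:

package sketch

type Taint struct{ Key, Value, Effect string }
type Toleration struct{ Key, Operator, Value, Effect string }

func tolerates(t Taint, tol Toleration) bool {
	if tol.Effect != "" && tol.Effect != t.Effect {
		return false
	}
	if tol.Operator == "Exists" {
		return tol.Key == "" || tol.Key == t.Key
	}
	return tol.Key == t.Key && tol.Value == t.Value // Equal (default)
}

// podToleratesNodeTaints: every hard taint must be matched by some
// toleration, otherwise the node is filtered out as infeasible.
func podToleratesNodeTaints(taints []Taint, tols []Toleration) bool {
	for _, t := range taints {
		if t.Effect == "PreferNoSchedule" {
			continue // soft preference, scored by priorities instead
		}
		matched := false
		for _, tol := range tols {
			if tolerates(t, tol) {
				matched = true
				break
			}
		}
		if !matched {
			return false
		}
	}
	return true
}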
I0920 06:49:49.137805  108489 shared_informer.go:197] Waiting for caches to sync for scheduler
I0920 06:49:49.137969  108489 reflector.go:118] Starting reflector *v1.Pod (12h0m0s) from k8s.io/kubernetes/test/integration/scheduler/util.go:231
I0920 06:49:49.137981  108489 reflector.go:153] Listing and watching *v1.Pod from k8s.io/kubernetes/test/integration/scheduler/util.go:231
I0920 06:49:49.138863  108489 httplog.go:90] GET /api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0: (609.663µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35478]
I0920 06:49:49.139626  108489 get.go:251] Starting watch for /api/v1/pods, rv=58793 labels= fields=status.phase!=Failed,status.phase!=Succeeded timeout=9m49s
I0920 06:49:49.207209  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:49.207268  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:49.207299  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:49.207364  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:49.207375  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:49.207996  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:49.238338  108489 shared_informer.go:227] caches populated
I0920 06:49:49.238598  108489 shared_informer.go:204] Caches are synced for scheduler 
I0920 06:49:49.239509  108489 reflector.go:118] Starting reflector *v1.StorageClass (1s) from k8s.io/client-go/informers/factory.go:134
I0920 06:49:49.239549  108489 reflector.go:153] Listing and watching *v1.StorageClass from k8s.io/client-go/informers/factory.go:134
I0920 06:49:49.239865  108489 reflector.go:118] Starting reflector *v1.ReplicationController (1s) from k8s.io/client-go/informers/factory.go:134
I0920 06:49:49.239888  108489 reflector.go:153] Listing and watching *v1.ReplicationController from k8s.io/client-go/informers/factory.go:134
I0920 06:49:49.240021  108489 reflector.go:118] Starting reflector *v1.StatefulSet (1s) from k8s.io/client-go/informers/factory.go:134
I0920 06:49:49.240034  108489 reflector.go:153] Listing and watching *v1.StatefulSet from k8s.io/client-go/informers/factory.go:134
I0920 06:49:49.240317  108489 reflector.go:118] Starting reflector *v1beta1.PodDisruptionBudget (1s) from k8s.io/client-go/informers/factory.go:134
I0920 06:49:49.240340  108489 reflector.go:153] Listing and watching *v1beta1.PodDisruptionBudget from k8s.io/client-go/informers/factory.go:134
I0920 06:49:49.240418  108489 reflector.go:118] Starting reflector *v1.PersistentVolumeClaim (1s) from k8s.io/client-go/informers/factory.go:134
I0920 06:49:49.240442  108489 reflector.go:153] Listing and watching *v1.PersistentVolumeClaim from k8s.io/client-go/informers/factory.go:134
I0920 06:49:49.240521  108489 reflector.go:118] Starting reflector *v1.Node (1s) from k8s.io/client-go/informers/factory.go:134
I0920 06:49:49.240534  108489 reflector.go:153] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:134
I0920 06:49:49.240765  108489 reflector.go:118] Starting reflector *v1beta1.CSINode (1s) from k8s.io/client-go/informers/factory.go:134
I0920 06:49:49.240780  108489 reflector.go:153] Listing and watching *v1beta1.CSINode from k8s.io/client-go/informers/factory.go:134
I0920 06:49:49.240897  108489 reflector.go:118] Starting reflector *v1.Service (1s) from k8s.io/client-go/informers/factory.go:134
I0920 06:49:49.240914  108489 reflector.go:153] Listing and watching *v1.Service from k8s.io/client-go/informers/factory.go:134
I0920 06:49:49.240953  108489 reflector.go:118] Starting reflector *v1.ReplicaSet (1s) from k8s.io/client-go/informers/factory.go:134
I0920 06:49:49.240966  108489 reflector.go:153] Listing and watching *v1.ReplicaSet from k8s.io/client-go/informers/factory.go:134
I0920 06:49:49.242559  108489 httplog.go:90] GET /apis/apps/v1/replicasets?limit=500&resourceVersion=0: (471.714µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35814]
I0920 06:49:49.242590  108489 httplog.go:90] GET /api/v1/replicationcontrollers?limit=500&resourceVersion=0: (608.47µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35800]
I0920 06:49:49.243094  108489 httplog.go:90] GET /apis/apps/v1/statefulsets?limit=500&resourceVersion=0: (425.76µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35802]
I0920 06:49:49.243310  108489 get.go:251] Starting watch for /apis/apps/v1/replicasets, rv=58795 labels= fields= timeout=5m13s
I0920 06:49:49.243353  108489 httplog.go:90] GET /apis/storage.k8s.io/v1beta1/csinodes?limit=500&resourceVersion=0: (490.504µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35810]
I0920 06:49:49.243505  108489 get.go:251] Starting watch for /api/v1/replicationcontrollers, rv=58793 labels= fields= timeout=8m29s
I0920 06:49:49.243523  108489 httplog.go:90] GET /api/v1/services?limit=500&resourceVersion=0: (646.411µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35812]
I0920 06:49:49.243804  108489 httplog.go:90] GET /apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: (344.047µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35804]
I0920 06:49:49.243736  108489 get.go:251] Starting watch for /apis/apps/v1/statefulsets, rv=58795 labels= fields= timeout=7m35s
I0920 06:49:49.244034  108489 httplog.go:90] GET /api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: (411.54µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35806]
I0920 06:49:49.244290  108489 httplog.go:90] GET /api/v1/nodes?limit=500&resourceVersion=0: (459.804µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35808]
I0920 06:49:49.244389  108489 get.go:251] Starting watch for /apis/storage.k8s.io/v1beta1/csinodes, rv=58795 labels= fields= timeout=8m59s
I0920 06:49:49.244554  108489 get.go:251] Starting watch for /api/v1/services, rv=59034 labels= fields= timeout=6m56s
I0920 06:49:49.244665  108489 get.go:251] Starting watch for /apis/policy/v1beta1/poddisruptionbudgets, rv=58794 labels= fields= timeout=6m58s
I0920 06:49:49.244918  108489 get.go:251] Starting watch for /api/v1/persistentvolumeclaims, rv=58793 labels= fields= timeout=6m25s
I0920 06:49:49.245088  108489 get.go:251] Starting watch for /api/v1/nodes, rv=58793 labels= fields= timeout=5m52s
I0920 06:49:49.245432  108489 reflector.go:118] Starting reflector *v1.PersistentVolume (1s) from k8s.io/client-go/informers/factory.go:134
I0920 06:49:49.245456  108489 reflector.go:153] Listing and watching *v1.PersistentVolume from k8s.io/client-go/informers/factory.go:134
I0920 06:49:49.246456  108489 httplog.go:90] GET /api/v1/persistentvolumes?limit=500&resourceVersion=0: (417.565µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35820]
I0920 06:49:49.247326  108489 httplog.go:90] GET /apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: (5.396231ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35480]
I0920 06:49:49.247407  108489 get.go:251] Starting watch for /api/v1/persistentvolumes, rv=58793 labels= fields= timeout=8m50s
I0920 06:49:49.249190  108489 get.go:251] Starting watch for /apis/storage.k8s.io/v1/storageclasses, rv=58795 labels= fields= timeout=6m54s
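Annotation: each "Starting reflector" pair above is the client-go list-then-watch pattern: a LIST with resourceVersion=0, then a WATCH from the version the LIST returned (the rv=... in the "Starting watch" lines). The recurring "forcing resync" lines are a timer redelivering the cached items to event handlers. A hedged sketch of that loop, with invented callback signatures:

package sketch

// reflectorLoop: LIST once, then WATCH from the returned version;
// resync ticks replay the store to handlers ("forcing resync").
func reflectorLoop(
	list func() (rv string, err error),
	watch func(fromRV string) error,
	resync <-chan struct{},
) error {
	rv, err := list() // e.g. GET /api/v1/pods?limit=500&resourceVersion=0
	if err != nil {
		return err
	}
	go func() {
		for range resync {
			// push the store's current contents to event handlers again
		}
	}()
	return watch(rv) // e.g. "Starting watch for /api/v1/pods, rv=58793"
}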
I0920 06:49:49.300498  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:49.339159  108489 shared_informer.go:227] caches populated
I0920 06:49:49.339204  108489 shared_informer.go:227] caches populated
I0920 06:49:49.339212  108489 shared_informer.go:227] caches populated
I0920 06:49:49.339220  108489 shared_informer.go:227] caches populated
I0920 06:49:49.339247  108489 shared_informer.go:227] caches populated
I0920 06:49:49.339290  108489 shared_informer.go:227] caches populated
I0920 06:49:49.339299  108489 shared_informer.go:227] caches populated
I0920 06:49:49.339305  108489 shared_informer.go:227] caches populated
I0920 06:49:49.339312  108489 shared_informer.go:227] caches populated
I0920 06:49:49.339324  108489 shared_informer.go:227] caches populated
I0920 06:49:49.339335  108489 shared_informer.go:227] caches populated
I0920 06:49:49.376340  108489 httplog.go:90] POST /api/v1/namespaces: (35.779749ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35860]
I0920 06:49:49.376880  108489 node_lifecycle_controller.go:327] Sending events to api server.
I0920 06:49:49.376953  108489 node_lifecycle_controller.go:359] Controller is using taint based evictions.
W0920 06:49:49.376977  108489 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0920 06:49:49.377031  108489 taint_manager.go:162] Sending events to api server.
I0920 06:49:49.377097  108489 node_lifecycle_controller.go:453] Controller will reconcile labels.
I0920 06:49:49.377120  108489 node_lifecycle_controller.go:465] Controller will taint node by condition.
W0920 06:49:49.377132  108489 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0920 06:49:49.377150  108489 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0920 06:49:49.377179  108489 node_lifecycle_controller.go:488] Starting node controller
I0920 06:49:49.377253  108489 shared_informer.go:197] Waiting for caches to sync for taint
I0920 06:49:49.377443  108489 reflector.go:118] Starting reflector *v1.Namespace (1s) from k8s.io/client-go/informers/factory.go:134
I0920 06:49:49.377474  108489 reflector.go:153] Listing and watching *v1.Namespace from k8s.io/client-go/informers/factory.go:134
I0920 06:49:49.378793  108489 httplog.go:90] GET /api/v1/namespaces?limit=500&resourceVersion=0: (944.152µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35860]
I0920 06:49:49.382910  108489 get.go:251] Starting watch for /api/v1/namespaces, rv=59047 labels= fields= timeout=8m47s
I0920 06:49:49.416410  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:49.477400  108489 shared_informer.go:227] caches populated
I0920 06:49:49.477564  108489 shared_informer.go:227] caches populated
I0920 06:49:49.477950  108489 reflector.go:118] Starting reflector *v1.DaemonSet (1s) from k8s.io/client-go/informers/factory.go:134
I0920 06:49:49.478041  108489 reflector.go:153] Listing and watching *v1.DaemonSet from k8s.io/client-go/informers/factory.go:134
I0920 06:49:49.478406  108489 reflector.go:118] Starting reflector *v1beta1.Lease (1s) from k8s.io/client-go/informers/factory.go:134
I0920 06:49:49.478429  108489 reflector.go:153] Listing and watching *v1beta1.Lease from k8s.io/client-go/informers/factory.go:134
I0920 06:49:49.479037  108489 reflector.go:118] Starting reflector *v1.Pod (1s) from k8s.io/client-go/informers/factory.go:134
I0920 06:49:49.479065  108489 reflector.go:153] Listing and watching *v1.Pod from k8s.io/client-go/informers/factory.go:134
I0920 06:49:49.480089  108489 httplog.go:90] GET /apis/coordination.k8s.io/v1beta1/leases?limit=500&resourceVersion=0: (1.053898ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35890]
I0920 06:49:49.480165  108489 httplog.go:90] GET /api/v1/pods?limit=500&resourceVersion=0: (670.39µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35894]
I0920 06:49:49.480951  108489 get.go:251] Starting watch for /apis/coordination.k8s.io/v1beta1/leases, rv=58793 labels= fields= timeout=9m12s
I0920 06:49:49.481019  108489 httplog.go:90] GET /apis/apps/v1/daemonsets?limit=500&resourceVersion=0: (545.016µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35894]
I0920 06:49:49.481739  108489 get.go:251] Starting watch for /apis/apps/v1/daemonsets, rv=58795 labels= fields= timeout=6m30s
I0920 06:49:49.481869  108489 get.go:251] Starting watch for /api/v1/pods, rv=58793 labels= fields= timeout=8m13s
I0920 06:49:49.490843  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:49.491960  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:49.492124  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:49.492273  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:49.492413  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:49.498075  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:49.498105  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:49.577424  108489 shared_informer.go:227] caches populated
I0920 06:49:49.577455  108489 shared_informer.go:204] Caches are synced for taint 
I0920 06:49:49.577537  108489 taint_manager.go:186] Starting NoExecuteTaintManager
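Annotation: the controllers started above are the machinery under test. With taint based evictions on, the node lifecycle controller evicts pods from unhealthy nodes by adding a NoExecute taint, and the NoExecuteTaintManager then decides, per pod, when to delete it. A simplified sketch of that per-pod decision, assuming the pod carries a matching NoExecute toleration (a pod with none at all is evicted immediately); this subtest uses tolerationSeconds=0:

package sketch

import "time"

// evictionDelay sketches the taint manager's choice for one pod whose
// toleration matches a NoExecute taint (real code diffs taint sets).
func evictionDelay(tolerationSeconds *int64) (time.Duration, bool) {
	if tolerationSeconds == nil {
		return 0, false // tolerated with no deadline: keep the pod
	}
	if *tolerationSeconds <= 0 {
		return 0, true // this subtest's case: evict immediately
	}
	return time.Duration(*tolerationSeconds) * time.Second, true
}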
I0920 06:49:49.577944  108489 shared_informer.go:227] caches populated
I0920 06:49:49.577972  108489 shared_informer.go:227] caches populated
I0920 06:49:49.577980  108489 shared_informer.go:227] caches populated
I0920 06:49:49.577986  108489 shared_informer.go:227] caches populated
I0920 06:49:49.577992  108489 shared_informer.go:227] caches populated
I0920 06:49:49.577999  108489 shared_informer.go:227] caches populated
I0920 06:49:49.578013  108489 shared_informer.go:227] caches populated
I0920 06:49:49.578018  108489 shared_informer.go:227] caches populated
I0920 06:49:49.578029  108489 shared_informer.go:227] caches populated
I0920 06:49:49.578035  108489 shared_informer.go:227] caches populated
I0920 06:49:49.578049  108489 shared_informer.go:227] caches populated
I0920 06:49:49.578055  108489 shared_informer.go:227] caches populated
I0920 06:49:49.582447  108489 httplog.go:90] POST /api/v1/nodes: (3.857781ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35894]
I0920 06:49:49.583139  108489 node_tree.go:93] Added node "node-0" in group "region1:\x00:zone1" to NodeTree
I0920 06:49:49.583534  108489 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-0"}
I0920 06:49:49.583572  108489 taint_manager.go:438] Updating known taints on node node-0: []
I0920 06:49:49.584994  108489 httplog.go:90] GET /api/v1/nodes/node-0?resourceVersion=0: (544.652µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35900]
I0920 06:49:49.586192  108489 httplog.go:90] POST /api/v1/nodes: (2.814634ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35894]
I0920 06:49:49.586652  108489 node_tree.go:93] Added node "node-1" in group "region1:\x00:zone1" to NodeTree
I0920 06:49:49.586817  108489 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-1"}
I0920 06:49:49.586892  108489 taint_manager.go:438] Updating known taints on node node-1: []
I0920 06:49:49.587562  108489 httplog.go:90] GET /api/v1/nodes/node-1?resourceVersion=0: (532.635µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35902]
I0920 06:49:49.590109  108489 httplog.go:90] POST /api/v1/nodes: (3.359816ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35894]
I0920 06:49:49.590429  108489 node_tree.go:93] Added node "node-2" in group "region1:\x00:zone1" to NodeTree
I0920 06:49:49.590527  108489 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-2"}
I0920 06:49:49.590553  108489 taint_manager.go:438] Updating known taints on node node-2: []
I0920 06:49:49.592265  108489 httplog.go:90] PATCH /api/v1/nodes/node-1: (3.670719ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35902]
I0920 06:49:49.592435  108489 httplog.go:90] PATCH /api/v1/nodes/node-0: (5.883975ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35900]
I0920 06:49:49.592510  108489 controller_utils.go:204] Added [&Taint{Key:node.kubernetes.io/memory-pressure,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 06:49:49.586616039 +0000 UTC m=+301.208796386,} &Taint{Key:node.kubernetes.io/disk-pressure,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 06:49:49.586616231 +0000 UTC m=+301.208796559,} &Taint{Key:node.kubernetes.io/pid-pressure,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 06:49:49.586616476 +0000 UTC m=+301.208796803,}] Taint to Node node-1
I0920 06:49:49.592557  108489 controller_utils.go:216] Made sure that Node node-1 has no [] Taint
I0920 06:49:49.592740  108489 controller_utils.go:204] Added [&Taint{Key:node.kubernetes.io/memory-pressure,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 06:49:49.5832379 +0000 UTC m=+301.205418248,} &Taint{Key:node.kubernetes.io/disk-pressure,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 06:49:49.583238163 +0000 UTC m=+301.205418489,} &Taint{Key:node.kubernetes.io/pid-pressure,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 06:49:49.583238354 +0000 UTC m=+301.205418681,}] Taint to Node node-0
I0920 06:49:49.592782  108489 controller_utils.go:216] Made sure that Node node-0 has no [] Taint
I0920 06:49:49.593756  108489 httplog.go:90] GET /api/v1/nodes/node-2?resourceVersion=0: (2.416767ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35904]
I0920 06:49:49.593883  108489 httplog.go:90] POST /api/v1/namespaces/taint-based-evictions43bf6697-75f3-4516-84da-833de9f9fc5b/pods: (2.332095ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35894]
I0920 06:49:49.594916  108489 scheduling_queue.go:830] About to try and schedule pod taint-based-evictions43bf6697-75f3-4516-84da-833de9f9fc5b/testpod-2
I0920 06:49:49.595007  108489 scheduler.go:530] Attempting to schedule pod: taint-based-evictions43bf6697-75f3-4516-84da-833de9f9fc5b/testpod-2
I0920 06:49:49.595066  108489 taint_manager.go:398] Noticed pod update: types.NamespacedName{Namespace:"taint-based-evictions43bf6697-75f3-4516-84da-833de9f9fc5b", Name:"testpod-2"}
I0920 06:49:49.596516  108489 scheduler_binder.go:257] AssumePodVolumes for pod "taint-based-evictions43bf6697-75f3-4516-84da-833de9f9fc5b/testpod-2", node "node-2"
I0920 06:49:49.596540  108489 scheduler_binder.go:267] AssumePodVolumes for pod "taint-based-evictions43bf6697-75f3-4516-84da-833de9f9fc5b/testpod-2", node "node-2": all PVCs bound and nothing to do
I0920 06:49:49.596621  108489 factory.go:606] Attempting to bind testpod-2 to node-2
I0920 06:49:49.597825  108489 httplog.go:90] PATCH /api/v1/nodes/node-2: (2.404536ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35900]
I0920 06:49:49.598638  108489 controller_utils.go:204] Added [&Taint{Key:node.kubernetes.io/memory-pressure,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 06:49:49.590500881 +0000 UTC m=+301.212681228,} &Taint{Key:node.kubernetes.io/disk-pressure,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 06:49:49.59050122 +0000 UTC m=+301.212681547,} &Taint{Key:node.kubernetes.io/pid-pressure,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 06:49:49.590501366 +0000 UTC m=+301.212681695,}] Taint to Node node-2
I0920 06:49:49.598737  108489 controller_utils.go:216] Made sure that Node node-2 has no [] Taint
I0920 06:49:49.598938  108489 httplog.go:90] POST /api/v1/namespaces/taint-based-evictions43bf6697-75f3-4516-84da-833de9f9fc5b/pods/testpod-2/binding: (2.033952ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35902]
I0920 06:49:49.599251  108489 scheduler.go:662] pod taint-based-evictions43bf6697-75f3-4516-84da-833de9f9fc5b/testpod-2 is bound successfully on node "node-2", 3 nodes evaluated, 1 nodes were found feasible. Bound node resource: "Capacity: CPU<4>|Memory<16Gi>|Pods<110>|StorageEphemeral<0>; Allocatable: CPU<4>|Memory<16Gi>|Pods<110>|StorageEphemeral<0>.".
I0920 06:49:49.599371  108489 taint_manager.go:398] Noticed pod update: types.NamespacedName{Namespace:"taint-based-evictions43bf6697-75f3-4516-84da-833de9f9fc5b", Name:"testpod-2"}
I0920 06:49:49.601366  108489 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/taint-based-evictions43bf6697-75f3-4516-84da-833de9f9fc5b/events: (1.747997ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35902]
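Annotation: the bind step above is a POST to the pod's binding subresource (POST .../pods/testpod-2/binding at 06:49:49.598). In Go API types the request body is roughly the following (values taken from the log; the construction is a sketch, not the scheduler's actual code):

package sketch

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// Body of the binding POST: attach testpod-2 to node-2.
var binding = &v1.Binding{
	ObjectMeta: metav1.ObjectMeta{Name: "testpod-2"},
	Target:     v1.ObjectReference{Kind: "Node", Name: "node-2"},
}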
I0920 06:49:49.699799  108489 httplog.go:90] GET /api/v1/namespaces/taint-based-evictions43bf6697-75f3-4516-84da-833de9f9fc5b/pods/testpod-2: (4.679512ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35902]
I0920 06:49:49.702172  108489 httplog.go:90] GET /api/v1/namespaces/taint-based-evictions43bf6697-75f3-4516-84da-833de9f9fc5b/pods/testpod-2: (1.679586ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35902]
I0920 06:49:49.705115  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.804357ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35902]
I0920 06:49:49.710762  108489 httplog.go:90] PUT /api/v1/nodes/node-2/status: (4.140111ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35902]
I0920 06:49:49.711901  108489 httplog.go:90] GET /api/v1/nodes/node-2?resourceVersion=0: (723.863µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35902]
I0920 06:49:49.716319  108489 httplog.go:90] PATCH /api/v1/nodes/node-2: (3.28339ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35902]
I0920 06:49:49.716794  108489 controller_utils.go:204] Added [&Taint{Key:node.kubernetes.io/not-ready,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 06:49:49.710923406 +0000 UTC m=+301.333103751,}] Taint to Node node-2
I0920 06:49:49.718334  108489 httplog.go:90] GET /api/v1/nodes/node-2?resourceVersion=0: (1.138302ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35902]
I0920 06:49:49.722977  108489 httplog.go:90] PATCH /api/v1/nodes/node-2: (3.685802ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35902]
I0920 06:49:49.723303  108489 controller_utils.go:216] Made sure that Node node-2 has no [&Taint{Key:node.kubernetes.io/memory-pressure,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 06:49:49 +0000 UTC,} &Taint{Key:node.kubernetes.io/disk-pressure,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 06:49:49 +0000 UTC,} &Taint{Key:node.kubernetes.io/pid-pressure,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 06:49:49 +0000 UTC,}] Taint
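Annotation: the PATCH pair above is TaintNodesByCondition at work. Freshly created nodes get the pressure NoSchedule taints while their conditions are unreported; after the test PUTs a status for node-2 (06:49:49.710) with Ready not true, the controller adds node.kubernetes.io/not-ready and strips the pressure taints. The well-known key mapping, sketched (the keys are the real taint keys; the rule text is a simplification, not the controller's code):

package sketch

// A pressure condition that is True (or unknown on a brand-new node)
// adds its NoSchedule taint; Ready != True adds not-ready; each taint
// is removed again once its condition clears.
var conditionTaints = map[string]string{
	"MemoryPressure": "node.kubernetes.io/memory-pressure",
	"DiskPressure":   "node.kubernetes.io/disk-pressure",
	"PIDPressure":    "node.kubernetes.io/pid-pressure",
	"Ready":          "node.kubernetes.io/not-ready",
}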
I0920 06:49:49.813785  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.07612ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35902]
I0920 06:49:49.900195  108489 httplog.go:90] GET /api/v1/namespaces/default: (1.999382ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38296]
I0920 06:49:49.902038  108489 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.387346ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38296]
I0920 06:49:49.903803  108489 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.32261ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38296]
I0920 06:49:49.913464  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.84232ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35902]
I0920 06:49:50.013441  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.841301ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35902]
I0920 06:49:50.094209  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:50.094549  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:50.094731  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:50.094869  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:50.094889  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:50.094900  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:50.099803  108489 scheduling_queue.go:830] About to try and schedule pod taint-based-evictionsab06a422-210e-4398-a535-6b106be64cc2/testpod-1
I0920 06:49:50.099831  108489 scheduler.go:530] Attempting to schedule pod: taint-based-evictionsab06a422-210e-4398-a535-6b106be64cc2/testpod-1
I0920 06:49:50.100354  108489 factory.go:541] Unable to schedule taint-based-evictionsab06a422-210e-4398-a535-6b106be64cc2/testpod-1: no fit: 0/3 nodes are available: 3 node(s) had taints that the pod didn't tolerate.; waiting
I0920 06:49:50.100442  108489 factory.go:615] Updating pod condition for taint-based-evictionsab06a422-210e-4398-a535-6b106be64cc2/testpod-1 to (PodScheduled==False, Reason=Unschedulable)
I0920 06:49:50.102568  108489 httplog.go:90] GET /api/v1/namespaces/taint-based-evictionsab06a422-210e-4398-a535-6b106be64cc2/pods/testpod-1: (1.824584ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41920]
I0920 06:49:50.102981  108489 generic_scheduler.go:337] Preemption will not help schedule pod taint-based-evictionsab06a422-210e-4398-a535-6b106be64cc2/testpod-1 on any node.
I0920 06:49:50.103102  108489 httplog.go:90] GET /api/v1/namespaces/taint-based-evictionsab06a422-210e-4398-a535-6b106be64cc2/pods/testpod-1: (2.242574ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41922]
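Annotation: testpod-1 carries no toleration for the NoSchedule taints now present on all three nodes, so scheduling fails ("0/3 nodes are available") and preemption cannot help, since evicting pods would not remove node taints. For contrast, the kind of toleration this test's subtests rely on looks like this in Go API types (a hypothetical pod spec fragment; tolerationSeconds of 0 means the taint manager deletes the pod as soon as the taint lands):

package sketch

import v1 "k8s.io/api/core/v1"

var zero int64 = 0

// Tolerate the not-ready NoExecute taint for exactly 0 seconds.
var notReadyZeroSeconds = v1.Toleration{
	Key:               "node.kubernetes.io/not-ready",
	Operator:          v1.TolerationOpExists,
	Effect:            v1.TaintEffectNoExecute,
	TolerationSeconds: &zero,
}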
I0920 06:49:50.113461  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.874119ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35902]
I0920 06:49:50.207468  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:50.207507  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:50.207468  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:50.207483  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:50.207614  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:50.208194  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:50.213534  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.992956ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35902]
I0920 06:49:50.243990  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:50.244321  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:50.244779  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:50.244878  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:50.247204  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:50.247827  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:50.300630  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:50.313732  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.058083ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35902]
I0920 06:49:50.413443  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.860708ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35902]
I0920 06:49:50.417606  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:50.480788  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:50.491099  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:50.492108  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:50.492265  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:50.492429  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:50.492540  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:50.498248  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:50.498248  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:50.514537  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.858168ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35902]
I0920 06:49:50.525628  108489 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 30.022017311s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:49:25 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:49:50.525736  108489 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 30.022133329s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:49:25 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:49:50.525791  108489 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 30.022189352s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:49:25 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:49:50.525807  108489 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 30.022207315s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:49:25 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:49:50.526316  108489 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 30.022656215s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:49:25 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:49:50.526363  108489 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 30.022709516s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:49:25 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:49:50.526400  108489 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 30.02274627s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:49:25 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:49:50.526433  108489 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 30.022779996s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:49:25 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:49:50.526523  108489 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 30.023007409s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:49:25 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:49:50.526549  108489 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 30.023034371s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:49:25 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:49:50.526582  108489 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 30.023065732s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:49:25 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:49:50.526599  108489 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 30.023084435s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:49:25 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
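These "hasn't been updated for ..." lines are the node lifecycle controller's health check: the elapsed time since the last kubelet heartbeat is compared against the node-monitor grace period, and conditions that stay unreported are held at Unknown with the NodeStatusUnknown reason. A minimal, self-contained sketch of that staleness comparison follows; nodeHealth, staleConditions, and the 10s grace period are illustrative, not the controller's actual names or defaults.

package main

import (
	"fmt"
	"time"
)

// nodeHealth loosely mirrors the per-node bookkeeping the lifecycle
// controller keeps: when it last observed a status update from the kubelet.
type nodeHealth struct {
	name           string
	probeTimestamp time.Time // last observed status update
}

// staleConditions reports whether the node has gone unreported for longer
// than the grace period, as in the "hasn't been updated for ..." lines.
func staleConditions(h nodeHealth, gracePeriod time.Duration, now time.Time) (bool, time.Duration) {
	elapsed := now.Sub(h.probeTimestamp)
	return elapsed > gracePeriod, elapsed
}

func main() {
	h := nodeHealth{name: "node-1", probeTimestamp: time.Now().Add(-30 * time.Second)}
	if stale, elapsed := staleConditions(h, 10*time.Second, time.Now()); stale {
		fmt.Printf("node %s hasn't been updated for %s; holding conditions at Unknown\n", h.name, elapsed)
	}
}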
I0920 06:49:50.613598  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.904307ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35902]
I0920 06:49:50.713805  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.998448ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35902]
I0920 06:49:50.813665  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.857663ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35902]
I0920 06:49:50.915451  108489 httplog.go:90] GET /api/v1/nodes/node-2: (3.603335ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35902]
I0920 06:49:51.013571  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.954322ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35902]
I0920 06:49:51.094433  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:51.094766  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:51.094888  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:51.094942  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:51.095042  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:51.095057  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
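The recurring "forcing resync" bursts come from shared informers constructed with a non-zero resync interval: every period, each reflector replays its cached objects to the registered handlers, independent of apiserver traffic. A minimal client-go sketch under assumed setup; the kubeconfig path is a placeholder, and the 1-second period is inferred from the once-per-second cadence of the bursts above (the integration test builds its client against the in-process apiserver instead).

package main

import (
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed setup: a kubeconfig-backed clientset.
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// A 1-second resync period makes each informer's reflector log
	// "forcing resync" once per second.
	factory := informers.NewSharedInformerFactory(client, 1*time.Second)
	factory.Core().V1().Nodes().Informer() // register at least one informer

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)
	factory.WaitForCacheSync(stop)
	time.Sleep(3 * time.Second) // long enough to observe a few resyncs
}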
I0920 06:49:51.113370  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.792172ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35902]
I0920 06:49:51.207674  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:51.207757  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:51.207806  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:51.207810  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:51.207840  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:51.208398  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:51.213993  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.390059ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35902]
I0920 06:49:51.244189  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:51.244554  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:51.245119  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:51.245124  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:51.247628  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:51.247924  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:51.300760  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:51.313739  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.09559ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35902]
I0920 06:49:51.413305  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.67342ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35902]
I0920 06:49:51.417801  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:51.481018  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:51.491290  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:51.492323  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:51.492560  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:51.492651  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:51.492746  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:51.498520  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:51.498520  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:51.513735  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.076215ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35902]
I0920 06:49:51.613434  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.774219ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35902]
I0920 06:49:51.714819  108489 httplog.go:90] GET /api/v1/nodes/node-2: (3.167018ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35902]
I0920 06:49:51.813559  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.89052ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35902]
I0920 06:49:51.914129  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.572758ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35902]
I0920 06:49:52.013412  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.804402ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35902]
I0920 06:49:52.094673  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:52.095053  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:52.095156  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:52.095179  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:52.095187  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:52.095259  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:52.113357  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.775208ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35902]
I0920 06:49:52.207855  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:52.207852  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:52.207929  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:52.208003  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:52.208126  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:52.208584  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:52.213900  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.29685ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35902]
I0920 06:49:52.244352  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:52.244735  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:52.245267  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:52.245279  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:52.247800  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:52.248052  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:52.301048  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:52.313593  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.905482ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35902]
I0920 06:49:52.413478  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.878276ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35902]
I0920 06:49:52.418040  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:52.481229  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:52.491478  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:52.492504  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:52.492793  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:52.492799  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:52.492974  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:52.498801  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:52.498990  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:52.515494  108489 httplog.go:90] GET /api/v1/nodes/node-2: (3.714474ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35902]
I0920 06:49:52.613528  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.842692ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35902]
I0920 06:49:52.713718  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.091069ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35902]
I0920 06:49:52.814675  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.890723ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35902]
I0920 06:49:52.913553  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.916099ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35902]
I0920 06:49:53.013733  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.068325ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35902]
I0920 06:49:53.094965  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:53.095235  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:53.095334  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:53.095312  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:53.095373  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:53.095338  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:53.113435  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.843397ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35902]
I0920 06:49:53.208028  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:53.208065  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:53.208080  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:53.208168  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:53.208262  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:53.208748  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:53.213448  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.845798ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35902]
I0920 06:49:53.244629  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:53.244914  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:53.245458  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:53.245458  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:53.248034  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:53.248261  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:53.301270  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:53.313682  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.00015ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35902]
I0920 06:49:53.384380  108489 httplog.go:90] GET /api/v1/namespaces/default: (1.44501ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52156]
E0920 06:49:53.384521  108489 factory.go:590] Error getting pod permit-plugin1683d175-8852-4e4d-b7a4-65f8210a961d/signalling-pod for retry: Get http://127.0.0.1:36687/api/v1/namespaces/permit-plugin1683d175-8852-4e4d-b7a4-65f8210a961d/pods/signalling-pod: dial tcp 127.0.0.1:36687: connect: connection refused; retrying...
I0920 06:49:53.386532  108489 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.462088ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52156]
I0920 06:49:53.388168  108489 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.164936ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52156]
I0920 06:49:53.413521  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.864764ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35902]
I0920 06:49:53.418208  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:53.481483  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:53.491792  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:53.492646  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:53.492941  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:53.492949  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:53.493148  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:53.498998  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:53.499141  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:53.513481  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.735743ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35902]
I0920 06:49:53.613620  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.995947ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35902]
I0920 06:49:53.713884  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.06739ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35902]
I0920 06:49:53.818431  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.977296ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35902]
I0920 06:49:53.914069  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.478674ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35902]
I0920 06:49:54.014150  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.146725ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35902]
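The steady 100ms GET /api/v1/nodes/node-2 requests are the test harness polling the node until the expected state appears. A sketch of such a loop with k8s.io/apimachinery's wait.PollImmediate; the kubeconfig path is a placeholder, and the predicate (waiting for the not-ready taint) is an assumption about what this subtest checks.

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// Poll every 100ms, as in the httplog lines, until node-2 carries the
	// node.kubernetes.io/not-ready taint or the timeout expires.
	err = wait.PollImmediate(100*time.Millisecond, time.Minute, func() (bool, error) {
		node, err := client.CoreV1().Nodes().Get(context.TODO(), "node-2", metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, t := range node.Spec.Taints {
			if t.Key == "node.kubernetes.io/not-ready" {
				return true, nil
			}
		}
		return false, nil
	})
	fmt.Println("poll result:", err)
}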
I0920 06:49:54.095172  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:54.095489  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:54.095497  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:54.095497  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:54.095497  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:54.095599  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:54.113560  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.978342ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35902]
I0920 06:49:54.208226  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:54.208226  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:54.208281  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:54.208300  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:54.208418  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:54.208915  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:54.213560  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.93991ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35902]
I0920 06:49:54.245004  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:54.245185  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:54.245741  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:54.245769  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:54.248337  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:54.248458  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:54.301483  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:54.313802  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.183104ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35902]
I0920 06:49:54.413565  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.925355ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35902]
I0920 06:49:54.418415  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:54.481882  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:54.491985  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:54.492799  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:54.493096  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:54.493103  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:54.493278  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:54.499230  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:54.499291  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:54.513626  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.861061ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35902]
I0920 06:49:54.577750  108489 node_lifecycle_controller.go:706] Controller observed a new Node: "node-0"
I0920 06:49:54.577792  108489 controller_utils.go:168] Recording Registered Node node-0 in Controller event message for node node-0
I0920 06:49:54.577874  108489 node_lifecycle_controller.go:1244] Initializing eviction metric for zone: region1:\x00:zone1
I0920 06:49:54.577909  108489 node_lifecycle_controller.go:706] Controller observed a new Node: "node-1"
I0920 06:49:54.577917  108489 controller_utils.go:168] Recording Registered Node node-1 in Controller event message for node node-1
I0920 06:49:54.577927  108489 node_lifecycle_controller.go:706] Controller observed a new Node: "node-2"
I0920 06:49:54.577931  108489 controller_utils.go:168] Recording Registered Node node-2 in Controller event message for node node-2
W0920 06:49:54.577992  108489 node_lifecycle_controller.go:940] Missing timestamp for Node node-0. Assuming now as a timestamp.
W0920 06:49:54.578040  108489 node_lifecycle_controller.go:940] Missing timestamp for Node node-1. Assuming now as a timestamp.
I0920 06:49:54.578045  108489 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-0", UID:"a48eaff0-1611-4683-b353-c9d4349c4ac2", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node node-0 event: Registered Node node-0 in Controller
I0920 06:49:54.578080  108489 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-1", UID:"363a5685-80a0-45e1-bd49-fc5b38ae6a80", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node node-1 event: Registered Node node-1 in Controller
I0920 06:49:54.578101  108489 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-2", UID:"5ff08cf0-8384-4a06-bd39-eaa44518927b", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node node-2 event: Registered Node node-2 in Controller
W0920 06:49:54.578070  108489 node_lifecycle_controller.go:940] Missing timestamp for Node node-2. Assuming now as a timestamp.
I0920 06:49:54.578148  108489 node_lifecycle_controller.go:770] Node node-2 is NotReady as of 2019-09-20 06:49:54.578128498 +0000 UTC m=+306.200308836. Adding it to the Taint queue.
I0920 06:49:54.578184  108489 node_lifecycle_controller.go:1144] Controller detected that zone region1:\x00:zone1 is now in state Normal.
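The zone key in the two eviction-metric lines is built by joining the node's region and zone labels with a NUL byte (the character that renders garbled in raw logs), so that region/zone pairs containing colons cannot collide. A tiny sketch of that construction; zoneKey is an illustrative stand-in for the helper in kubernetes' node utilities.

package main

import "fmt"

// zoneKey mimics the region/zone key seen above: region and zone are joined
// with a NUL separator so that ("a", "b:c") and ("a:b", "c") cannot collide.
// The NUL byte is what shows up garbled when the key is printed raw.
func zoneKey(region, zone string) string {
	if region == "" && zone == "" {
		return ""
	}
	return region + ":\x00:" + zone
}

func main() {
	fmt.Printf("%q\n", zoneKey("region1", "zone1")) // "region1:\x00:zone1"
}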
I0920 06:49:54.581171  108489 httplog.go:90] POST /api/v1/namespaces/default/events: (2.72701ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35902]
I0920 06:49:54.583514  108489 httplog.go:90] POST /api/v1/namespaces/default/events: (1.829853ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35902]
I0920 06:49:54.586082  108489 httplog.go:90] POST /api/v1/namespaces/default/events: (1.945394ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35902]
I0920 06:49:54.586558  108489 httplog.go:90] GET /api/v1/nodes/node-2?resourceVersion=0: (538.496µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35900]
I0920 06:49:54.589578  108489 httplog.go:90] PATCH /api/v1/nodes/node-2: (2.092308ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35900]
I0920 06:49:54.589832  108489 controller_utils.go:204] Added [&Taint{Key:node.kubernetes.io/not-ready,Value:,Effect:NoExecute,TimeAdded:2019-09-20 06:49:54.585813505 +0000 UTC m=+306.207993825,}] Taint to Node node-2
I0920 06:49:54.589877  108489 controller_utils.go:216] Made sure that Node node-2 has no [&Taint{Key:node.kubernetes.io/unreachable,Value:,Effect:NoExecute,TimeAdded:<nil>,}] Taint
I0920 06:49:54.590342  108489 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-2"}
I0920 06:49:54.590366  108489 taint_manager.go:438] Updating known taints on node node-2: [{node.kubernetes.io/not-ready  NoExecute 2019-09-20 06:49:54 +0000 UTC}]
I0920 06:49:54.590417  108489 timed_workers.go:110] Adding TimedWorkerQueue item taint-based-evictions43bf6697-75f3-4516-84da-833de9f9fc5b/testpod-2 at 2019-09-20 06:49:54.590407712 +0000 UTC m=+306.212588059 to be fired at 2019-09-20 06:49:54.590407712 +0000 UTC m=+306.212588059
I0920 06:49:54.590449  108489 taint_manager.go:105] NoExecuteTaintManager is deleting Pod: taint-based-evictions43bf6697-75f3-4516-84da-833de9f9fc5b/testpod-2
I0920 06:49:54.590764  108489 event.go:255] Event(v1.ObjectReference{Kind:"Pod", Namespace:"taint-based-evictions43bf6697-75f3-4516-84da-833de9f9fc5b", Name:"testpod-2", UID:"", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TaintManagerEviction' Marking for deletion Pod taint-based-evictions43bf6697-75f3-4516-84da-833de9f9fc5b/testpod-2
I0920 06:49:54.594665  108489 httplog.go:90] DELETE /api/v1/namespaces/taint-based-evictions43bf6697-75f3-4516-84da-833de9f9fc5b/pods/testpod-2: (3.998336ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35900]
I0920 06:49:54.595500  108489 httplog.go:90] POST /api/v1/namespaces/taint-based-evictions43bf6697-75f3-4516-84da-833de9f9fc5b/events: (4.595526ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35902]
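This is the core of the taint-based-eviction flow: the controller PATCHes node-2 to add the node.kubernetes.io/not-ready:NoExecute taint, the NoExecuteTaintManager notices the node update, and because tolerationSeconds is 0 the TimedWorkerQueue fires immediately and deletes testpod-2. A hedged sketch of adding that taint with client-go (recent, context-taking signatures); the kubeconfig path is a placeholder, and a plain Update stands in for the controller's PATCH.

package main

import (
	"context"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	node, err := client.CoreV1().Nodes().Get(context.TODO(), "node-2", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// Add the NoExecute taint the controller applies when a node goes NotReady.
	now := metav1.Now()
	node.Spec.Taints = append(node.Spec.Taints, v1.Taint{
		Key:       "node.kubernetes.io/not-ready",
		Effect:    v1.TaintEffectNoExecute,
		TimeAdded: &now,
	})
	if _, err := client.CoreV1().Nodes().Update(context.TODO(), node, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}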
I0920 06:49:54.613435  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.86724ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35902]
I0920 06:49:54.713554  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.980499ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35902]
I0920 06:49:54.814253  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.313607ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35902]
I0920 06:49:54.913623  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.09154ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35902]
I0920 06:49:55.013352  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.646373ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35902]
I0920 06:49:55.095365  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:55.095615  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:55.095689  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:55.095728  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:55.095744  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:55.095750  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:55.100541  108489 scheduling_queue.go:830] About to try and schedule pod taint-based-evictionsab06a422-210e-4398-a535-6b106be64cc2/testpod-1
I0920 06:49:55.100649  108489 scheduler.go:530] Attempting to schedule pod: taint-based-evictionsab06a422-210e-4398-a535-6b106be64cc2/testpod-1
I0920 06:49:55.100941  108489 factory.go:541] Unable to schedule taint-based-evictionsab06a422-210e-4398-a535-6b106be64cc2/testpod-1: no fit: 0/3 nodes are available: 3 node(s) had taints that the pod didn't tolerate.; waiting
I0920 06:49:55.101016  108489 factory.go:615] Updating pod condition for taint-based-evictionsab06a422-210e-4398-a535-6b106be64cc2/testpod-1 to (PodScheduled==False, Reason=Unschedulable)
I0920 06:49:55.103041  108489 httplog.go:90] GET /api/v1/namespaces/taint-based-evictionsab06a422-210e-4398-a535-6b106be64cc2/pods/testpod-1: (1.703858ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41922]
I0920 06:49:55.103045  108489 httplog.go:90] GET /api/v1/namespaces/taint-based-evictionsab06a422-210e-4398-a535-6b106be64cc2/pods/testpod-1: (1.621304ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41920]
I0920 06:49:55.103379  108489 generic_scheduler.go:337] Preemption will not help schedule pod taint-based-evictionsab06a422-210e-4398-a535-6b106be64cc2/testpod-1 on any node.
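Meanwhile testpod-1 stays Pending: every node now carries a taint it does not tolerate, and preemption cannot help because taints are not a resource conflict. Given the subtest name, the evicted pod presumably carried a toleration shaped like the sketch below, where tolerationSeconds: 0 means "evict as soon as the taint appears"; the pod and image names are illustrative.

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	zero := int64(0)
	pod := v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "testpod-2"},
		Spec: v1.PodSpec{
			Containers: []v1.Container{{Name: "c", Image: "k8s.gcr.io/pause:3.1"}},
			Tolerations: []v1.Toleration{{
				Key:      "node.kubernetes.io/not-ready",
				Operator: v1.TolerationOpExists,
				Effect:   v1.TaintEffectNoExecute,
				// 0 seconds: the pod is marked for deletion the moment the
				// matching NoExecute taint lands on its node.
				TolerationSeconds: &zero,
			}},
		},
	}
	fmt.Printf("%+v\n", pod.Spec.Tolerations)
}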
I0920 06:49:55.110974  108489 httplog.go:90] GET /api/v1/namespaces/default: (1.603095ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41922]
I0920 06:49:55.113028  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.276973ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35902]
I0920 06:49:55.114758  108489 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (2.543609ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41922]
I0920 06:49:55.116391  108489 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.143618ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41922]
I0920 06:49:55.208472  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:55.208486  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:55.208544  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:55.208634  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:55.208776  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:55.209139  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:55.213586  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.040803ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35902]
I0920 06:49:55.245542  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:55.245542  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:55.245932  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:55.245935  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:55.248527  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:55.248542  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:55.301736  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:55.314085  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.443886ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35902]
I0920 06:49:55.414021  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.434299ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35902]
I0920 06:49:55.419024  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:55.482109  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:55.492237  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:55.492990  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:55.493279  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:55.493423  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:55.493296  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:55.499415  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:55.499457  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:55.513380  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.735273ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35902]
I0920 06:49:55.526898  108489 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 35.023338813s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:49:25 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:49:55.528067  108489 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 35.024541418s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:49:25 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:49:55.528234  108489 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 35.024714977s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:49:25 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:49:55.528309  108489 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 35.024791595s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:49:25 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:49:55.528463  108489 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 35.024862682s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:49:25 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:49:55.528528  108489 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 35.024927235s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:49:25 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:49:55.528589  108489 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 35.024987079s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:49:25 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:49:55.528720  108489 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 35.025116919s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:49:25 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:49:55.528849  108489 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 35.025195082s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:49:25 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:49:55.528973  108489 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 35.025318666s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:49:25 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:49:55.529049  108489 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 35.025393602s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:49:25 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:49:55.529111  108489 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 35.025455595s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:49:25 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:49:55.614104  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.430219ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35902]
I0920 06:49:55.713369  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.801465ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35902]
I0920 06:49:55.813641  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.919637ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35902]
I0920 06:49:55.913515  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.839211ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35902]
I0920 06:49:56.013476  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.887193ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35902]
I0920 06:49:56.095595  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:56.095786  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:56.095893  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:56.095910  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:56.095925  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:56.095932  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:56.114140  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.389454ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35902]
I0920 06:49:56.208648  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:56.208721  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:56.208755  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:56.208744  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:56.208965  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:56.209295  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:56.213488  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.866065ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35902]
I0920 06:49:56.245740  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:56.245886  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:56.246305  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:56.246326  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:56.248753  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:56.248802  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:56.301961  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:56.313694  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.990459ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35902]
I0920 06:49:56.413322  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.705188ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35902]
I0920 06:49:56.419231  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:56.482459  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:56.492452  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:56.493194  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:56.493561  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:56.493562  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:56.493913  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:56.499598  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:56.499635  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:56.513683  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.07177ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35902]
I0920 06:49:56.613221  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.597114ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35902]
I0920 06:49:56.713200  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.618875ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35902]
I0920 06:49:56.813597  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.9506ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35902]
I0920 06:49:56.914017  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.303263ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35902]
I0920 06:49:57.013922  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.327865ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35902]
I0920 06:49:57.095791  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:57.095999  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:57.096068  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:57.096087  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:57.096117  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:57.095942  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:57.114479  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.62947ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35902]
I0920 06:49:57.208814  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:57.208863  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:57.208878  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:57.208974  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:57.209109  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:57.209511  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:57.213569  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.934777ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35902]
I0920 06:49:57.245972  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:57.246263  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:57.246398  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:57.246530  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:57.248858  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:57.248959  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:57.302211  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:57.313803  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.133781ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35902]
I0920 06:49:57.413716  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.954707ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35902]
I0920 06:49:57.419461  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:57.482750  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:57.492653  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:57.493386  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:57.493736  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:57.493759  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:57.494059  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:57.499801  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:57.499824  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:57.513898  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.258559ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35902]
I0920 06:49:57.613256  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.703198ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35902]
I0920 06:49:57.713679  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.050947ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35902]
I0920 06:49:57.813651  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.996633ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35902]
I0920 06:49:57.913893  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.227827ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35902]
I0920 06:49:58.013229  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.663802ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35902]
I0920 06:49:58.095992  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:58.096274  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:58.096312  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:58.096317  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:58.096340  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:58.096560  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:58.113559  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.982975ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35902]
I0920 06:49:58.209021  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:58.209021  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:58.209031  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:58.209418  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:58.209455  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:58.209743  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:58.213611  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.002748ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35902]
I0920 06:49:58.246155  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:58.246515  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:58.246786  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:58.247082  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:58.249050  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:58.249051  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:58.302421  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:58.314333  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.493499ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35902]
I0920 06:49:58.413730  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.059235ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35902]
I0920 06:49:58.419803  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:58.482954  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:58.492968  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:58.493589  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:58.493897  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:58.493903  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:58.494287  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:58.500036  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:58.500068  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:58.513621  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.972912ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35902]
I0920 06:49:58.613585  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.887598ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35902]
I0920 06:49:58.714003  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.205303ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35902]
I0920 06:49:58.813305  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.776655ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35902]
I0920 06:49:58.913740  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.913528ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35902]
I0920 06:49:59.013669  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.064468ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35902]
I0920 06:49:59.050518  108489 httplog.go:90] GET /api/v1/namespaces/default: (1.893563ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35902]
I0920 06:49:59.052447  108489 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.381441ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35902]
I0920 06:49:59.054093  108489 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.138613ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35902]
I0920 06:49:59.096176  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:59.096462  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:59.096462  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:59.096489  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:59.096462  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:59.096751  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:59.113542  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.887012ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35902]
I0920 06:49:59.209204  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:59.209211  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:59.209217  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:59.209568  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:59.209584  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:59.210004  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:59.213213  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.638836ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35902]
I0920 06:49:59.246349  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:59.246744  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:59.246944  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:59.247339  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:59.249358  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:59.249471  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:59.302651  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:59.314256  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.67489ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35902]
I0920 06:49:59.413769  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.195754ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35902]
I0920 06:49:59.420059  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:59.483258  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:59.493257  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:59.493830  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:59.494165  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:59.494170  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:59.494433  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:59.500378  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:59.500478  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:59.513731  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.103304ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35902]
I0920 06:49:59.578460  108489 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 5.00044952s. Last Ready is: &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:0001-01-01 00:00:00 +0000 UTC,Reason:,Message:,}
I0920 06:49:59.578531  108489 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 5.000529613s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:True,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:0001-01-01 00:00:00 +0000 UTC,Reason:,Message:,}
I0920 06:49:59.578553  108489 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 5.00055187s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:True,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:0001-01-01 00:00:00 +0000 UTC,Reason:,Message:,}
I0920 06:49:59.578579  108489 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 5.000575517s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:True,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:0001-01-01 00:00:00 +0000 UTC,Reason:,Message:,}
I0920 06:49:59.582128  108489 httplog.go:90] PUT /api/v1/nodes/node-0/status: (2.829018ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35902]
I0920 06:49:59.582508  108489 controller_utils.go:180] Recording status change NodeNotReady event message for node node-0
I0920 06:49:59.582545  108489 controller_utils.go:124] Update ready status of pods on node [node-0]
I0920 06:49:59.582777  108489 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-0", UID:"a48eaff0-1611-4683-b353-c9d4349c4ac2", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeNotReady' Node node-0 status is now: NodeNotReady
I0920 06:49:59.583342  108489 httplog.go:90] GET /api/v1/nodes/node-0?resourceVersion=0: (480.287µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35900]
I0920 06:49:59.584273  108489 httplog.go:90] GET /api/v1/pods?fieldSelector=spec.nodeName%3Dnode-0: (1.48857ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35902]
I0920 06:49:59.584527  108489 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 5.00647451s. Last Ready is: &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:0001-01-01 00:00:00 +0000 UTC,Reason:,Message:,}
I0920 06:49:59.584576  108489 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 5.006528509s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:True,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:0001-01-01 00:00:00 +0000 UTC,Reason:,Message:,}
I0920 06:49:59.584597  108489 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 5.006549704s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:True,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:0001-01-01 00:00:00 +0000 UTC,Reason:,Message:,}
I0920 06:49:59.584610  108489 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 5.006563541s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:True,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:0001-01-01 00:00:00 +0000 UTC,Reason:,Message:,}
I0920 06:49:59.585228  108489 httplog.go:90] POST /api/v1/namespaces/default/events: (1.881746ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:49:59.587107  108489 httplog.go:90] PUT /api/v1/nodes/node-1/status: (2.172168ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35902]
I0920 06:49:59.587107  108489 httplog.go:90] PATCH /api/v1/nodes/node-0: (2.725626ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35900]
I0920 06:49:59.587647  108489 controller_utils.go:180] Recording status change NodeNotReady event message for node node-1
I0920 06:49:59.587688  108489 controller_utils.go:124] Update ready status of pods on node [node-1]
I0920 06:49:59.587785  108489 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-1", UID:"363a5685-80a0-45e1-bd49-fc5b38ae6a80", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeNotReady' Node node-1 status is now: NodeNotReady
I0920 06:49:59.588026  108489 controller_utils.go:204] Added [&Taint{Key:node.kubernetes.io/unreachable,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 06:49:59.582611444 +0000 UTC m=+311.204791763,}] Taint to Node node-0
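Note on the taint being applied here: controller_utils reports the node lifecycle controller attaching node.kubernetes.io/unreachable with the NoSchedule effect to node-0 once its heartbeats go stale. As a minimal sketch (my own example, not the controller's code), the logged taint corresponds to the following core/v1 value:

    package main

    import (
        "fmt"

        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        // The taint the controller recorded for node-0; TimeAdded is stamped
        // by the controller at the moment the taint is applied.
        now := metav1.Now()
        taint := v1.Taint{
            Key:       "node.kubernetes.io/unreachable",
            Effect:    v1.TaintEffectNoSchedule,
            TimeAdded: &now,
        }
        fmt.Printf("%s=%s:%s\n", taint.Key, taint.Value, taint.Effect)
    }

The "m=+311.2..." suffix printed after TimeAdded in the log is Go's monotonic clock reading, which a time.Time captured with time.Now() carries alongside the wall-clock value.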
I0920 06:49:59.589245  108489 httplog.go:90] GET /api/v1/nodes/node-1?resourceVersion=0: (415.124µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36806]
I0920 06:49:59.589307  108489 httplog.go:90] GET /api/v1/nodes/node-0?resourceVersion=0: (385.502µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36804]
I0920 06:49:59.589553  108489 httplog.go:90] GET /api/v1/pods?fieldSelector=spec.nodeName%3Dnode-1: (1.262646ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35902]
I0920 06:49:59.589801  108489 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 5.011683417s. Last Ready is: &NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:0001-01-01 00:00:00 +0000 UTC,Reason:,Message:,}
I0920 06:49:59.589867  108489 node_lifecycle_controller.go:1012] Condition MemoryPressure of node node-2 was never updated by kubelet
I0920 06:49:59.589876  108489 node_lifecycle_controller.go:1012] Condition DiskPressure of node node-2 was never updated by kubelet
I0920 06:49:59.589882  108489 node_lifecycle_controller.go:1012] Condition PIDPressure of node node-2 was never updated by kubelet
I0920 06:49:59.590170  108489 httplog.go:90] POST /api/v1/namespaces/default/events: (2.109296ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:49:59.592317  108489 httplog.go:90] PUT /api/v1/nodes/node-2/status: (2.25683ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35902]
I0920 06:49:59.592587  108489 httplog.go:90] PATCH /api/v1/nodes/node-1: (2.254218ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36806]
I0920 06:49:59.592658  108489 node_lifecycle_controller.go:1094] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
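All three nodes now have Ready=Unknown, so the controller reports full disruption and throttles evictions rather than draining the cluster. A deliberately reduced illustration of that decision follows (names are mine; the real logic in node_lifecycle_controller.go also works per zone and distinguishes partial from full disruption):

    package main

    import "fmt"

    // allNotReady is a toy stand-in for the check behind "Controller detected
    // that all Nodes are not-Ready": when no node is Ready, the controller
    // suspects a control-plane-side problem and stops evicting instead of
    // emptying every node at once.
    func allNotReady(readyByNode map[string]bool) bool {
        for _, ready := range readyByNode {
            if ready {
                return false
            }
        }
        return true
    }

    func main() {
        state := map[string]bool{"node-0": false, "node-1": false, "node-2": false}
        fmt.Println("enter disruption mode:", allNotReady(state))
    }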
I0920 06:49:59.593117  108489 httplog.go:90] GET /api/v1/nodes/node-2?resourceVersion=0: (416.049µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35902]
I0920 06:49:59.593256  108489 httplog.go:90] GET /api/v1/nodes/node-2?resourceVersion=0: (390.694µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:49:59.595851  108489 controller_utils.go:204] Added [&Taint{Key:node.kubernetes.io/unreachable,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 06:49:59.58841749 +0000 UTC m=+311.210597833,}] Taint to Node node-1
I0920 06:49:59.596532  108489 httplog.go:90] GET /api/v1/nodes/node-1?resourceVersion=0: (417.244µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36806]
I0920 06:49:59.596742  108489 store.go:362] GuaranteedUpdate of /32191624-1a71-4204-8e71-cb59337eea94/minions/node-2 failed because of a conflict, going to retry
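The GuaranteedUpdate conflict on minions/node-2 is ordinary optimistic concurrency: two writers raced on node-2, the loser's resourceVersion was stale, and the storage layer re-reads and retries. API clients handle the same 409 pattern with k8s.io/client-go/util/retry; a sketch under the assumption of a reachable cluster in ~/.kube/config and a node named node-2 (signatures match the context-free client-go of this 1.16-era log):

    package main

    import (
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
        "k8s.io/client-go/util/retry"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        clientset := kubernetes.NewForConfigOrDie(config)

        // Client-side mirror of the server's retry: re-read the node, mutate,
        // and try the write again whenever the resourceVersion conflicts.
        err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
            node, err := clientset.CoreV1().Nodes().Get("node-2", metav1.GetOptions{})
            if err != nil {
                return err
            }
            if node.Labels == nil {
                node.Labels = map[string]string{}
            }
            node.Labels["example.com/touched"] = "true" // hypothetical mutation
            _, err = clientset.CoreV1().Nodes().Update(node)
            return err
        })
        fmt.Println("update result:", err)
    }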
I0920 06:49:59.596931  108489 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-2"}
I0920 06:49:59.596962  108489 taint_manager.go:438] Updating known taints on node node-2: []
I0920 06:49:59.596981  108489 taint_manager.go:459] All taints were removed from the Node node-2. Cancelling all evictions...
I0920 06:49:59.596994  108489 timed_workers.go:129] Cancelling TimedWorkerQueue item taint-based-evictions43bf6697-75f3-4516-84da-833de9f9fc5b/testpod-2 at 2019-09-20 06:49:59.596990143 +0000 UTC m=+311.219170488
I0920 06:49:59.597358  108489 httplog.go:90] PATCH /api/v1/nodes/node-2: (2.981332ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35902]
I0920 06:49:59.597456  108489 httplog.go:90] PATCH /api/v1/nodes/node-0: (5.786782ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36804]
I0920 06:49:59.598066  108489 controller_utils.go:216] Made sure that Node node-0 has no [&Taint{Key:node.kubernetes.io/memory-pressure,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 06:49:49 +0000 UTC,} &Taint{Key:node.kubernetes.io/disk-pressure,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 06:49:49 +0000 UTC,} &Taint{Key:node.kubernetes.io/pid-pressure,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 06:49:49 +0000 UTC,}] Taint
I0920 06:49:59.599805  108489 httplog.go:90] PATCH /api/v1/nodes/node-2: (5.712101ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:49:59.600219  108489 controller_utils.go:204] Added [&Taint{Key:node.kubernetes.io/unreachable,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 06:49:59.592507837 +0000 UTC m=+311.214688180,}] Taint to Node node-2
I0920 06:49:59.600581  108489 httplog.go:90] PATCH /api/v1/nodes/node-1: (2.408252ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35902]
I0920 06:49:59.600884  108489 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-2"}
I0920 06:49:59.600906  108489 taint_manager.go:438] Updating known taints on node node-2: [{node.kubernetes.io/not-ready  NoExecute 2019-09-20 06:49:54 +0000 UTC}]
I0920 06:49:59.600891  108489 controller_utils.go:216] Made sure that Node node-1 has no [&Taint{Key:node.kubernetes.io/memory-pressure,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 06:49:49 +0000 UTC,} &Taint{Key:node.kubernetes.io/disk-pressure,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 06:49:49 +0000 UTC,} &Taint{Key:node.kubernetes.io/pid-pressure,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 06:49:49 +0000 UTC,}] Taint
I0920 06:49:59.600948  108489 timed_workers.go:110] Adding TimedWorkerQueue item taint-based-evictions43bf6697-75f3-4516-84da-833de9f9fc5b/testpod-2 at 2019-09-20 06:49:59.600928159 +0000 UTC m=+311.223108505 to be fired at 2019-09-20 06:49:59.600928159 +0000 UTC m=+311.223108505
I0920 06:49:59.600989  108489 taint_manager.go:105] NoExecuteTaintManager is deleting Pod: taint-based-evictions43bf6697-75f3-4516-84da-833de9f9fc5b/testpod-2
I0920 06:49:59.601074  108489 httplog.go:90] GET /api/v1/nodes/node-2?resourceVersion=0: (602.379µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:49:59.601275  108489 event.go:255] Event(v1.ObjectReference{Kind:"Pod", Namespace:"taint-based-evictions43bf6697-75f3-4516-84da-833de9f9fc5b", Name:"testpod-2", UID:"", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TaintManagerEviction' Marking for deletion Pod taint-based-evictions43bf6697-75f3-4516-84da-833de9f9fc5b/testpod-2
I0920 06:49:59.603166  108489 httplog.go:90] DELETE /api/v1/namespaces/taint-based-evictions43bf6697-75f3-4516-84da-833de9f9fc5b/pods/testpod-2: (1.956962ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35902]
I0920 06:49:59.604343  108489 httplog.go:90] PATCH /api/v1/namespaces/taint-based-evictions43bf6697-75f3-4516-84da-833de9f9fc5b/events/testpod-2.15c612cda294e2ea: (2.797933ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:49:59.605020  108489 httplog.go:90] PATCH /api/v1/nodes/node-2: (2.381471ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36806]
I0920 06:49:59.605379  108489 controller_utils.go:216] Made sure that Node node-2 has no [&Taint{Key:node.kubernetes.io/not-ready,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 06:49:49 +0000 UTC,}] Taint
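testpod-2 was evicted the instant the not-ready NoExecute taint landed: the TimedWorkerQueue item was added and fired at the same timestamp, which is the behavior of a toleration with tolerationSeconds set to 0, as this subtest's name indicates. A hedged sketch of such a toleration (the pod spec itself is my assumption, it is not shown in the log):

    package main

    import (
        "fmt"

        v1 "k8s.io/api/core/v1"
    )

    func main() {
        // tolerationSeconds = 0: the pod tolerates the NoExecute taint for
        // zero seconds, so the taint manager's timed worker is added and
        // fired at the same instant, exactly as the log shows.
        zero := int64(0)
        toleration := v1.Toleration{
            Key:               "node.kubernetes.io/not-ready",
            Operator:          v1.TolerationOpExists,
            Effect:            v1.TaintEffectNoExecute,
            TolerationSeconds: &zero,
        }
        fmt.Printf("%+v\n", toleration)
    }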
I0920 06:49:59.613212  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.645681ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:49:59.899793  108489 httplog.go:90] GET /api/v1/namespaces/default: (1.700274ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38296]
I0920 06:49:59.901807  108489 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.483403ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38296]
I0920 06:49:59.903645  108489 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.255552ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38296]
I0920 06:49:59.913398  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.838564ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:00.096401  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:00.529517  108489 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 40.025848507s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:49:25 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:50:00.529587  108489 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 40.025934886s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:49:25 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:50:00.529602  108489 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 40.025949996s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:49:25 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:50:00.529632  108489 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 40.025980737s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:49:25 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:50:00.529730  108489 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 40.026216255s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:49:25 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:50:00.529745  108489 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 40.026231277s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:49:25 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:50:00.529756  108489 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 40.026241619s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:49:25 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:50:00.529766  108489 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 40.026252417s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:49:25 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:50:00.529811  108489 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 40.026211649s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:49:25 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:50:00.529827  108489 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 40.026227218s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:49:25 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:50:00.529852  108489 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 40.026252511s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:49:25 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:50:00.529889  108489 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 40.026289253s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:49:25 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
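Each "hasn't been updated" line above is the controller comparing a node's last heartbeat against its grace period (the --node-monitor-grace-period flag, 40s by default) before flipping conditions to Unknown with reason NodeStatusUnknown. A self-contained illustration of that staleness test (the helper name is mine):

    package main

    import (
        "fmt"
        "time"
    )

    // stale reports whether a heartbeat is older than the grace period.
    // Illustration only; the real check lives in node_lifecycle_controller.go.
    func stale(lastHeartbeat time.Time, grace time.Duration, now time.Time) bool {
        return now.Sub(lastHeartbeat) > grace
    }

    func main() {
        grace := 40 * time.Second // default --node-monitor-grace-period
        last := time.Now().Add(-45 * time.Second)
        fmt.Println("mark conditions Unknown:", stale(last, grace, time.Now()))
    }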
I0920 06:50:00.613933  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.287252ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:01.096696  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:04.097528  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:04.101937  108489 scheduling_queue.go:830] About to try and schedule pod taint-based-evictionsab06a422-210e-4398-a535-6b106be64cc2/testpod-1
I0920 06:50:04.101970  108489 scheduler.go:530] Attempting to schedule pod: taint-based-evictionsab06a422-210e-4398-a535-6b106be64cc2/testpod-1
I0920 06:50:04.102156  108489 factory.go:541] Unable to schedule taint-based-evictionsab06a422-210e-4398-a535-6b106be64cc2/testpod-1: no fit: 0/3 nodes are available: 3 node(s) had taints that the pod didn't tolerate.; waiting
I0920 06:50:04.102202  108489 factory.go:615] Updating pod condition for taint-based-evictionsab06a422-210e-4398-a535-6b106be64cc2/testpod-1 to (PodScheduled==False, Reason=Unschedulable)
I0920 06:50:04.104496  108489 httplog.go:90] GET /api/v1/namespaces/taint-based-evictionsab06a422-210e-4398-a535-6b106be64cc2/pods/testpod-1: (1.943814ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41920]
I0920 06:50:04.104502  108489 httplog.go:90] GET /api/v1/namespaces/taint-based-evictionsab06a422-210e-4398-a535-6b106be64cc2/pods/testpod-1: (1.906279ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41922]
I0920 06:50:04.104999  108489 generic_scheduler.go:337] Preemption will not help schedule pod taint-based-evictionsab06a422-210e-4398-a535-6b106be64cc2/testpod-1 on any node.
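testpod-1 fails the taint predicate on every node ("3 node(s) had taints that the pod didn't tolerate"), and preemption is skipped because removing pods cannot remove a node taint. A simplified stand-in for the toleration-matching check (a re-implementation for illustration, not the scheduler's own helper):

    package main

    import (
        "fmt"

        v1 "k8s.io/api/core/v1"
    )

    // tolerates is a simplified match between one toleration and one taint:
    // an Exists toleration needs only key and effect to line up; the empty
    // operator defaults to Equal on the value.
    func tolerates(t v1.Toleration, taint v1.Taint) bool {
        if t.Effect != "" && t.Effect != taint.Effect {
            return false
        }
        if t.Key != "" && t.Key != taint.Key {
            return false
        }
        if t.Operator == v1.TolerationOpExists {
            return true
        }
        return t.Value == taint.Value
    }

    func main() {
        taint := v1.Taint{Key: "node.kubernetes.io/unreachable", Effect: v1.TaintEffectNoSchedule}
        var tolerations []v1.Toleration // testpod-1 carries none that match
        fit := false
        for _, t := range tolerations {
            if tolerates(t, taint) {
                fit = true
            }
        }
        fmt.Println("fits on a tainted node:", fit) // false -> "0/3 nodes are available"
    }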
I0920 06:50:04.113331  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.732397ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:04.210490  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:04.213479  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.885528ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:04.597925  108489 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 10.019905362s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:49:59 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:50:04.598016  108489 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 10.020014655s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:49:59 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:50:04.598035  108489 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 10.020034545s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:49:59 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:50:04.598055  108489 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 10.020054806s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:49:59 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:50:04.598122  108489 node_lifecycle_controller.go:796] Node node-0 is unresponsive as of 2019-09-20 06:50:04.598102457 +0000 UTC m=+316.220282802. Adding it to the Taint queue.
I0920 06:50:04.598221  108489 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 10.020172278s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:49:59 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:50:04.598257  108489 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 10.020210229s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:49:59 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:50:04.598272  108489 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 10.020225856s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:49:59 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:50:04.598296  108489 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 10.020240679s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:49:59 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:50:04.598332  108489 node_lifecycle_controller.go:796] Node node-1 is unresponsive as of 2019-09-20 06:50:04.598318633 +0000 UTC m=+316.220498979. Adding it to the Taint queue.
I0920 06:50:04.598368  108489 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 10.020252801s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:49:59 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:50:04.598385  108489 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 10.020270809s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 06:49:49 +0000 UTC,LastTransitionTime:2019-09-20 06:49:59 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 06:50:04.598412  108489 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 10.02029832s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 06:49:49 +0000 UTC,LastTransitionTime:2019-09-20 06:49:59 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 06:50:04.598430  108489 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 10.020316082s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 06:49:49 +0000 UTC,LastTransitionTime:2019-09-20 06:49:59 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 06:50:04.599940  108489 httplog.go:90] GET /api/v1/nodes/node-2?resourceVersion=0: (724.372µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:04.604085  108489 httplog.go:90] PATCH /api/v1/nodes/node-2: (3.054665ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:04.604547  108489 controller_utils.go:204] Added [&Taint{Key:node.kubernetes.io/unreachable,Value:,Effect:NoExecute,TimeAdded:2019-09-20 06:50:04.598457657 +0000 UTC m=+316.220638003,}] Taint to Node node-2
I0920 06:50:04.604817  108489 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-2"}
I0920 06:50:04.604849  108489 taint_manager.go:438] Updating known taints on node node-2: [{node.kubernetes.io/not-ready  NoExecute 2019-09-20 06:49:54 +0000 UTC} {node.kubernetes.io/unreachable  NoExecute 2019-09-20 06:50:04 +0000 UTC}]
I0920 06:50:04.604894  108489 timed_workers.go:110] Adding TimedWorkerQueue item taint-based-evictions43bf6697-75f3-4516-84da-833de9f9fc5b/testpod-2 at 2019-09-20 06:50:04.604878654 +0000 UTC m=+316.227059002 to be fired at 2019-09-20 06:50:04.604878654 +0000 UTC m=+316.227059002
W0920 06:50:04.604911  108489 timed_workers.go:115] Trying to add already existing work for &{NamespacedName:taint-based-evictions43bf6697-75f3-4516-84da-833de9f9fc5b/testpod-2}. Skipping.
I0920 06:50:04.605824  108489 httplog.go:90] GET /api/v1/nodes/node-2?resourceVersion=0: (791.137µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:04.610739  108489 httplog.go:90] PATCH /api/v1/nodes/node-2: (3.650654ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:04.611139  108489 controller_utils.go:216] Made sure that Node node-2 has no [&Taint{Key:node.kubernetes.io/not-ready,Value:,Effect:NoExecute,TimeAdded:<nil>,}] Taint
I0920 06:50:04.611362  108489 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-2"}
I0920 06:50:04.611386  108489 taint_manager.go:438] Updating known taints on node node-2: [{node.kubernetes.io/unreachable  NoExecute 2019-09-20 06:50:04 +0000 UTC}]
I0920 06:50:04.611423  108489 timed_workers.go:110] Adding TimedWorkerQueue item taint-based-evictions43bf6697-75f3-4516-84da-833de9f9fc5b/testpod-2 at 2019-09-20 06:50:04.611410186 +0000 UTC m=+316.233590533 to be fired at 2019-09-20 06:55:04.611410186 +0000 UTC m=+616.233590533
W0920 06:50:04.611447  108489 timed_workers.go:115] Trying to add already existing work for &{NamespacedName:taint-based-evictions43bf6697-75f3-4516-84da-833de9f9fc5b/testpod-2}. Skipping.
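With the not-ready taint swapped for unreachable, the eviction worker for testpod-2 is re-queued to fire at 06:55:04, five minutes out, which is consistent with the default 300-second tolerationSeconds that the DefaultTolerationSeconds admission plugin attaches for node.kubernetes.io/unreachable (the pod's actual tolerations are not shown in this log). The "Adding/Cancelling TimedWorkerQueue item" lines amount to one cancellable timer per pod key; a much-reduced toy version follows (types and names are mine, not the taint manager's API):

    package main

    import (
        "fmt"
        "sync"
        "time"
    )

    // queue is a toy stand-in for the taint manager's TimedWorkerQueue:
    // one cancellable timer per namespace/name key.
    type queue struct {
        mu     sync.Mutex
        timers map[string]*time.Timer
    }

    // AddWork schedules fn after delay; re-adding an existing key is skipped,
    // mirroring the "Trying to add already existing work ... Skipping" line.
    func (q *queue) AddWork(key string, delay time.Duration, fn func()) {
        q.mu.Lock()
        defer q.mu.Unlock()
        if _, exists := q.timers[key]; exists {
            fmt.Println("already existing work for", key, "- skipping")
            return
        }
        q.timers[key] = time.AfterFunc(delay, fn)
    }

    // CancelWork stops a pending eviction, as when all taints are removed.
    func (q *queue) CancelWork(key string) {
        q.mu.Lock()
        defer q.mu.Unlock()
        if t, ok := q.timers[key]; ok {
            t.Stop()
            delete(q.timers, key)
        }
    }

    func main() {
        q := &queue{timers: map[string]*time.Timer{}}
        q.AddWork("ns/testpod-2", 10*time.Millisecond, func() { fmt.Println("evict ns/testpod-2") })
        q.AddWork("ns/testpod-2", 0, nil) // skipped, like the log's warning
        time.Sleep(50 * time.Millisecond)
    }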
I0920 06:50:04.613071  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.507603ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:05.097752  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:05.113517  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.899125ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:05.530160  108489 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 45.026630024s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:49:25 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:50:05.530230  108489 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 45.026715073s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:49:25 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:50:05.530253  108489 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 45.026738233s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:49:25 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:50:05.530273  108489 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 45.026758168s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:49:25 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:50:05.530425  108489 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 45.026820468s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:49:25 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:50:05.530450  108489 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 45.026850828s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:49:25 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:50:05.530468  108489 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 45.026868281s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:49:25 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:50:05.530483  108489 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 45.026879204s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:49:25 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:50:05.530526  108489 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 45.026874072s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:49:25 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:50:05.530538  108489 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 45.026886868s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:49:25 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:50:05.530846  108489 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 45.026896749s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:49:25 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:50:05.530870  108489 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 45.027218683s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:49:25 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:50:05.614181  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.378801ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:05.714153  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.313567ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:05.813598  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.959759ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:05.913876  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.088703ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:06.013995  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.239095ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:06.097998  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:06.098287  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:06.098357  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:06.098378  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:06.098400  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:06.098428  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:06.113558  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.899378ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:06.210925  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:06.210952  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:06.211003  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:06.211021  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:06.211167  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:06.211423  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
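The recurring reflector.go:236 "forcing resync" bursts come from the shared informer factories the test's control-plane components run: each informer periodically re-delivers its cached objects to registered handlers. A minimal sketch of wiring up such a factory follows; the 1s resync period is inferred from the once-per-second cadence of these bursts and is an assumption:

package main

import (
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	config, err := rest.InClusterConfig() // stand-in; the test wires its own config
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// A defaultResync of 1s would reproduce the once-per-second
	// "forcing resync" bursts seen in this log (assumption).
	factory := informers.NewSharedInformerFactory(client, 1*time.Second)
	nodeInformer := factory.Core().V1().Nodes().Informer()
	_ = nodeInformer // event handlers would be registered here

	stop := make(chan struct{})
	factory.Start(stop)
	select {} // block forever; each resync period logs "forcing resync"
}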
I0920 06:50:06.213630  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.75107ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:06.248163  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:06.248251  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:06.248469  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:06.248935  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:06.250939  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:06.251022  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:06.304411  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:06.314558  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.730601ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:06.414017  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.188985ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:06.422011  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:06.485181  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:06.495499  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:06.495531  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:06.495886  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:06.495906  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:06.495895  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:06.502502  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:06.502652  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:06.514059  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.272963ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:06.614088  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.395148ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:06.713954  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.039473ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:06.813676  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.055941ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:06.914235  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.516084ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:07.013777  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.077934ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:07.098345  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:07.098530  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:07.098554  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:07.098566  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:07.098569  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:07.098580  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:07.113793  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.148595ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:07.211121  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:07.211156  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:07.211130  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:07.211151  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:07.211310  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:07.211656  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:07.213961  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.059354ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:07.248675  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:07.248671  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:07.248807  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:07.249123  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:07.251180  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:07.251443  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:07.304722  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:07.314217  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.39256ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:07.413402  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.834299ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:07.422298  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:07.485612  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:07.495621  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:07.495933  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:07.496040  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:07.496072  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:07.496165  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:07.502765  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:07.502766  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:07.514122  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.44567ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:07.613336  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.818689ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:07.714024  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.152702ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:07.814088  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.328062ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:07.914149  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.401868ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:08.014297  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.467265ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:08.098799  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:08.098931  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:08.098966  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:08.098981  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:08.099733  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:08.099771  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:08.113995  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.290229ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:08.211320  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:08.211409  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:08.211333  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:08.211333  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:08.211465  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:08.211838  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:08.213780  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.128263ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:08.248929  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:08.248985  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:08.249006  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:08.249475  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:08.251614  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:08.251685  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:08.305002  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:08.313912  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.208612ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:08.413786  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.135282ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:08.422513  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:08.485907  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:08.495882  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:08.496197  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:08.496227  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:08.496235  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:08.496243  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:08.503180  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:08.503207  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:08.514329  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.458959ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:08.613647  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.881129ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:08.714526  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.747883ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:08.813917  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.146775ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:08.913963  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.161481ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:09.013884  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.073648ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:09.050679  108489 httplog.go:90] GET /api/v1/namespaces/default: (1.842237ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:09.053377  108489 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.928143ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:09.056238  108489 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (2.020364ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:09.098988  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:09.099141  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:09.099156  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:09.099155  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:09.100029  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:09.100089  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:09.114074  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.23759ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:09.211678  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:09.211678  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:09.211691  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:09.211742  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:09.211743  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:09.212097  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:09.213938  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.095671ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:09.249182  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:09.249267  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:09.249313  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:09.249678  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:09.251884  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:09.251884  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:09.305254  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:09.314093  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.32215ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:09.414002  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.301949ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:09.422844  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:09.486149  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:09.496099  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:09.496389  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:09.496416  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:09.496419  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:09.496503  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:09.503439  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:09.503448  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:09.513837  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.143272ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:09.611741  108489 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 15.033687823s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:49:59 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:50:09.611833  108489 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 15.0338314s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:49:59 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:50:09.611849  108489 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 15.033849094s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:49:59 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:50:09.611861  108489 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 15.033861544s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:49:59 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:50:09.611934  108489 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 15.033888112s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:49:59 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:50:09.611954  108489 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 15.033908146s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:49:59 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:50:09.611968  108489 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 15.033922606s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:49:59 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:50:09.611978  108489 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 15.033932415s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:49:59 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:50:09.612023  108489 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 15.033910389s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:49:59 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:50:09.612043  108489 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 15.033930687s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 06:49:49 +0000 UTC,LastTransitionTime:2019-09-20 06:49:59 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 06:50:09.612053  108489 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 15.03394079s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 06:49:49 +0000 UTC,LastTransitionTime:2019-09-20 06:49:59 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 06:50:09.612064  108489 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 15.033951222s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 06:49:49 +0000 UTC,LastTransitionTime:2019-09-20 06:49:59 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 06:50:09.612096  108489 node_lifecycle_controller.go:796] Node node-2 is unresponsive as of 2019-09-20 06:50:09.612085263 +0000 UTC m=+321.234265607. Adding it to the Taint queue.
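"Adding it to the Taint queue" is where TaintBasedEvictions takes over: an unresponsive node is queued to receive a NoExecute taint (node.kubernetes.io/not-ready when Ready is False, node.kubernetes.io/unreachable when it is Unknown), and pods whose tolerations don't cover that taint, or that tolerate it with tolerationSeconds set to 0 as this test exercises, are evicted immediately. A sketch of the taint and a matching zero-second toleration follows; this is illustrative, not the controller's code:

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func main() {
	// The NoExecute taint applied to a node whose Ready condition is
	// False; for Status Unknown (as for node-2 above) the key would be
	// node.kubernetes.io/unreachable instead.
	notReady := v1.Taint{
		Key:    "node.kubernetes.io/not-ready",
		Effect: v1.TaintEffectNoExecute,
	}

	// A toleration with tolerationSeconds: 0: the pod tolerates the
	// taint for zero seconds, i.e. it is evicted as soon as the taint
	// lands on its node.
	zero := int64(0)
	tol := v1.Toleration{
		Key:               "node.kubernetes.io/not-ready",
		Operator:          v1.TolerationOpExists,
		Effect:            v1.TaintEffectNoExecute,
		TolerationSeconds: &zero,
	}

	fmt.Println(tol.ToleratesTaint(&notReady)) // true: key and effect match
}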
I0920 06:50:09.614594  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.747057ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:09.714486  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.780531ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:09.814027  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.237485ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:09.900479  108489 httplog.go:90] GET /api/v1/namespaces/default: (2.150424ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38296]
I0920 06:50:09.902865  108489 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.698316ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38296]
I0920 06:50:09.904851  108489 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.467801ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38296]
I0920 06:50:09.914128  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.330845ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:10.014143  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.381751ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:10.099383  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:10.099675  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:10.099488  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:10.099496  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:10.100405  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:10.100405  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:10.114031  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.202582ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:10.212155  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:10.212190  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:10.212202  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:10.212156  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:10.212178  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:10.212335  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:10.214258  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.54506ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:10.249421  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:10.249477  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:10.249474  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:10.250027  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:10.252083  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:10.252150  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:10.305796  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:10.314129  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.028487ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:10.414040  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.441273ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:10.423174  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:10.486368  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:10.496468  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:10.496569  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:10.496632  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:10.496647  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:10.496652  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:10.503843  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:10.503872  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:10.514068  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.289471ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:10.531182  108489 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 50.027652024s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:49:25 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:50:10.531250  108489 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 50.027735923s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:49:25 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:50:10.531265  108489 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 50.027750882s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:49:25 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:50:10.531276  108489 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 50.027762192s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:49:25 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:50:10.531363  108489 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 50.02776336s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:49:25 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:50:10.531377  108489 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 50.027777793s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:49:25 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:50:10.531389  108489 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 50.02778908s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:49:25 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:50:10.531398  108489 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 50.027799169s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:49:25 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:50:10.531446  108489 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 50.027794467s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:49:25 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:50:10.531458  108489 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 50.027806574s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:49:25 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:50:10.531467  108489 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 50.027815713s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:49:25 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:50:10.531481  108489 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 50.027829279s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:49:25 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:50:10.614183  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.414672ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:10.713962  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.132346ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:10.813911  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.167801ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:10.914017  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.155704ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:11.013747  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.047459ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:11.099932  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:11.099981  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:11.100004  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:11.100022  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:11.100611  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:11.100727  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:11.113883  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.099896ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:11.212354  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:11.212366  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:11.212373  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:11.212391  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:11.212481  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:11.212417  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:11.214581  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.80662ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:11.250023  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:11.250037  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:11.250259  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:11.250285  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:11.252337  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:11.252391  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:11.306040  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:11.313344  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.716494ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:11.414493  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.683634ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:11.423457  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:11.486830  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:11.496725  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:11.496747  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:11.496828  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:11.496837  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:11.496872  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:11.504096  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:11.504170  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:11.514125  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.345928ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:11.614046  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.178787ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:11.714238  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.491514ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:11.813989  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.04754ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:11.823054  108489 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.815224ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41922]
I0920 06:50:11.825601  108489 httplog.go:90] GET /api/v1/namespaces/kube-public: (2.001979ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41922]
I0920 06:50:11.828251  108489 httplog.go:90] GET /api/v1/namespaces/kube-node-lease: (2.03121ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41922]
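The trio of GETs for kube-system, kube-public, and kube-node-lease is the API server periodically making sure the system namespaces exist, creating any that are missing; the 200s here mean all three are present. A hedged sketch of that ensure loop follows, with ensureNamespace as a hypothetical helper name:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// ensureNamespace GETs the namespace and creates it only when the GET
// comes back NotFound; a 200, as in the log above, is a no-op.
func ensureNamespace(client kubernetes.Interface, name string) error {
	_, err := client.CoreV1().Namespaces().Get(name, metav1.GetOptions{})
	if err == nil || !errors.IsNotFound(err) {
		return err // already present, or a real error
	}
	ns := &corev1.Namespace{ObjectMeta: metav1.ObjectMeta{Name: name}}
	_, err = client.CoreV1().Namespaces().Create(ns)
	return err
}

func main() {
	config, err := rest.InClusterConfig() // stand-in config
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)
	for _, ns := range []string{"kube-system", "kube-public", "kube-node-lease"} {
		if err := ensureNamespace(client, ns); err != nil {
			fmt.Println("ensure", ns, ":", err)
		}
	}
}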
I0920 06:50:11.914148  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.348164ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:12.014002  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.29151ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:12.100202  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:12.100242  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:12.100244  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:12.100200  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:12.100885  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:12.100937  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:12.113944  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.047335ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:12.213050  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:12.213074  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:12.213101  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:12.213125  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:12.213162  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:12.213282  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:12.214216  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.583755ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:12.250556  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:12.250685  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:12.250805  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:12.250841  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:12.252558  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:12.252754  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:12.306241  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:12.313963  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.12036ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:12.414164  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.442868ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:12.423751  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:12.487436  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:12.497189  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:12.497215  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:12.497365  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:12.497399  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:12.497481  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:12.504383  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:12.504405  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:12.514117  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.327852ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:12.613869  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.097907ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:12.713694  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.935009ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:12.814210  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.334205ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:12.914097  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.219744ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:13.013746  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.933768ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:13.100433  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:13.100470  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:13.100460  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:13.101082  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:13.101104  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:13.100486  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:13.114439  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.674681ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:13.213586  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:13.213669  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:13.213683  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:13.213817  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:13.214029  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:13.214089  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:13.214641  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.867593ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:13.250803  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:13.250988  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:13.251200  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:13.251235  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:13.252778  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:13.253008  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:13.306598  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:13.314226  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.452417ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:13.385268  108489 httplog.go:90] GET /api/v1/namespaces/default: (2.065405ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52156]
I0920 06:50:13.387644  108489 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.768696ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52156]
I0920 06:50:13.390587  108489 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (2.228996ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52156]
I0920 06:50:13.414325  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.513955ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:13.423979  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:13.487644  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:13.497419  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:13.497424  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:13.497526  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:13.497732  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:13.497774  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:13.504694  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:13.504810  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:13.513883  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.229711ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:13.614086  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.21841ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:13.714377  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.673744ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:13.813963  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.164316ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:13.913873  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.107555ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:14.014398  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.432256ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:14.101060  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:14.101060  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:14.101408  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:14.101417  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:14.101615  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:14.101621  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:14.113974  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.045534ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:14.213761  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.080462ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:14.214082  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:14.214312  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:14.214346  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:14.214351  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:14.214366  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:14.214369  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:14.251086  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:14.251260  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:14.251374  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:14.251417  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:14.252990  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:14.253205  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:14.306915  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:14.313845  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.037725ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:14.414051  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.254056ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:14.424352  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:14.487938  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:14.497733  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:14.497733  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:14.497741  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:14.497909  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:14.498007  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:14.504991  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:14.504991  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:14.514181  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.503235ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:14.612392  108489 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 20.03426019s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:49:59 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:50:14.612468  108489 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 20.034352884s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 06:49:49 +0000 UTC,LastTransitionTime:2019-09-20 06:49:59 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 06:50:14.612490  108489 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 20.034375492s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 06:49:49 +0000 UTC,LastTransitionTime:2019-09-20 06:49:59 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 06:50:14.612506  108489 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 20.034391755s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 06:49:49 +0000 UTC,LastTransitionTime:2019-09-20 06:49:59 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 06:50:14.612605  108489 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 20.034603278s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:49:59 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:50:14.612625  108489 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 20.034624462s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:49:59 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:50:14.612663  108489 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 20.034659784s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:49:59 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:50:14.612685  108489 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 20.034684602s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:49:59 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:50:14.612770  108489 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 20.034715871s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:49:59 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:50:14.612791  108489 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 20.034744362s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:49:59 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:50:14.612811  108489 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 20.03476018s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:49:59 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:50:14.612832  108489 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 20.034783881s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:49:59 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:50:14.614213  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.382939ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:14.713769  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.069497ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:14.814258  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.442071ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:14.914032  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.243299ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:15.014455  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.743149ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:15.101325  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:15.101631  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:15.101358  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:15.101844  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:15.101604  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:15.101908  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:15.104261  108489 scheduling_queue.go:830] About to try and schedule pod taint-based-evictionsab06a422-210e-4398-a535-6b106be64cc2/testpod-1
I0920 06:50:15.104297  108489 scheduler.go:530] Attempting to schedule pod: taint-based-evictionsab06a422-210e-4398-a535-6b106be64cc2/testpod-1
I0920 06:50:15.104477  108489 factory.go:541] Unable to schedule taint-based-evictionsab06a422-210e-4398-a535-6b106be64cc2/testpod-1: no fit: 0/3 nodes are available: 3 node(s) had taints that the pod didn't tolerate.; waiting
I0920 06:50:15.104536  108489 factory.go:615] Updating pod condition for taint-based-evictionsab06a422-210e-4398-a535-6b106be64cc2/testpod-1 to (PodScheduled==False, Reason=Unschedulable)
I0920 06:50:15.106914  108489 httplog.go:90] GET /api/v1/namespaces/taint-based-evictionsab06a422-210e-4398-a535-6b106be64cc2/pods/testpod-1: (2.037763ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41922]
I0920 06:50:15.106914  108489 httplog.go:90] GET /api/v1/namespaces/taint-based-evictionsab06a422-210e-4398-a535-6b106be64cc2/pods/testpod-1: (1.90379ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41920]
I0920 06:50:15.107379  108489 generic_scheduler.go:337] Preemption will not help schedule pod taint-based-evictionsab06a422-210e-4398-a535-6b106be64cc2/testpod-1 on any node.
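
(The cycle above — "Unable to schedule … 3 node(s) had taints that the pod didn't tolerate" followed by "Preemption will not help" — repeats for testpod-1 because the pod carries no toleration matching the NoExecute taints on the nodes. A minimal sketch of a toleration that would match the not-ready taint; this is a hypothetical pod-spec fragment for illustration, not the test's actual fixture:

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func main() {
	seconds := int64(0) // tolerate for 0s, i.e. evict as soon as the taint lands
	tol := v1.Toleration{
		Key:               "node.kubernetes.io/not-ready",
		Operator:          v1.TolerationOpExists,
		Effect:            v1.TaintEffectNoExecute,
		TolerationSeconds: &seconds,
	}
	fmt.Printf("%+v\n", tol)
}

A pod whose spec.tolerations contains such an entry would pass the taint predicate instead of producing the "0/3 nodes are available" log line.)
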
I0920 06:50:15.111519  108489 httplog.go:90] GET /api/v1/namespaces/default: (1.740997ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41920]
I0920 06:50:15.114115  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.768798ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:15.114115  108489 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.916349ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41920]
I0920 06:50:15.116379  108489 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.735129ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41920]
I0920 06:50:15.214238  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.389603ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:15.214309  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:15.214535  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:15.214631  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:15.214544  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:15.214568  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:15.214576  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:15.251443  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:15.251527  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:15.251726  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:15.251751  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:15.253259  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:15.253431  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:15.307183  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:15.314367  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.45767ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:15.414264  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.467358ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:15.424670  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:15.488263  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:15.498009  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:15.498006  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:15.498079  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:15.498208  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:15.498413  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:15.505387  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:15.505392  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:15.513846  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.063067ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:15.531812  108489 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 55.028196925s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:49:25 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:50:15.531904  108489 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 55.028301544s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:49:25 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:50:15.531930  108489 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 55.028328522s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:49:25 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:50:15.531946  108489 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 55.028345343s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:49:25 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:50:15.532068  108489 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 55.028415214s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:49:25 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:50:15.532091  108489 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 55.028436695s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:49:25 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:50:15.532105  108489 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 55.028453808s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:49:25 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:50:15.532114  108489 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 55.028463219s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:49:25 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:50:15.532161  108489 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 55.02864716s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:49:25 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:50:15.532171  108489 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 55.028657855s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:49:25 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:50:15.532183  108489 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 55.028669089s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:49:25 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:50:15.532197  108489 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 55.028683469s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:49:25 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
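
(The "hasn't been updated for …" block above is the node lifecycle controller observing that each node's heartbeat is older than the node monitor grace period, so every condition is held at Unknown with "Kubelet stopped posting node status." A much-reduced illustrative check — not the controller's actual code:

package main

import (
	"fmt"
	"time"
)

// staleBeyondGrace reports whether a node's last heartbeat is older than the
// grace period; when true, the controller holds its conditions at Unknown.
func staleBeyondGrace(lastHeartbeat time.Time, grace time.Duration) bool {
	return time.Since(lastHeartbeat) > grace
}

func main() {
	last := time.Now().Add(-55 * time.Second)                // matches the ~55s in the log
	fmt.Println(staleBeyondGrace(last, 40*time.Second))      // true -> condition stays Unknown
}

)
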
I0920 06:50:15.614223  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.26115ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:15.714851  108489 httplog.go:90] GET /api/v1/nodes/node-2: (3.207142ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:15.814144  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.43802ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:15.921028  108489 httplog.go:90] GET /api/v1/nodes/node-2: (6.160389ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:16.016257  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.593881ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:16.102196  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:16.102316  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:16.102579  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:16.102343  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:16.102523  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:16.104095  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:16.117605  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.436292ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:16.213976  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.128967ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:16.214537  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:16.214817  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:16.214822  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:16.214826  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:16.215048  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:16.215071  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:16.251682  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:16.251787  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:16.251881  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:16.251964  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:16.253659  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:16.253669  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:16.307425  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:16.313850  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.244207ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:16.413944  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.252666ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:16.424950  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:16.488754  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:16.498196  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:16.498196  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:16.498196  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:16.498369  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:16.498621  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:16.505840  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:16.505905  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:16.513458  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.830486ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:16.613837  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.19257ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:16.713574  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.881052ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:16.813663  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.991191ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:16.914017  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.910345ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:17.013422  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.931536ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:17.102404  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:17.102755  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:17.102825  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:17.102871  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:17.102785  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:17.104329  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:17.105299  108489 scheduling_queue.go:830] About to try and schedule pod taint-based-evictionsab06a422-210e-4398-a535-6b106be64cc2/testpod-1
I0920 06:50:17.105392  108489 scheduler.go:530] Attempting to schedule pod: taint-based-evictionsab06a422-210e-4398-a535-6b106be64cc2/testpod-1
I0920 06:50:17.105569  108489 factory.go:541] Unable to schedule taint-based-evictionsab06a422-210e-4398-a535-6b106be64cc2/testpod-1: no fit: 0/3 nodes are available: 3 node(s) had taints that the pod didn't tolerate.; waiting
I0920 06:50:17.105653  108489 factory.go:615] Updating pod condition for taint-based-evictionsab06a422-210e-4398-a535-6b106be64cc2/testpod-1 to (PodScheduled==False, Reason=Unschedulable)
I0920 06:50:17.107842  108489 httplog.go:90] GET /api/v1/namespaces/taint-based-evictionsab06a422-210e-4398-a535-6b106be64cc2/pods/testpod-1: (1.813864ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41922]
I0920 06:50:17.107845  108489 httplog.go:90] GET /api/v1/namespaces/taint-based-evictionsab06a422-210e-4398-a535-6b106be64cc2/pods/testpod-1: (1.821383ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41920]
I0920 06:50:17.108145  108489 generic_scheduler.go:337] Preemption will not help schedule pod taint-based-evictionsab06a422-210e-4398-a535-6b106be64cc2/testpod-1 on any node.
I0920 06:50:17.113278  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.677923ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:17.213514  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.870938ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:17.214775  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:17.215051  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:17.215063  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:17.215086  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:17.215223  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:17.215230  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:17.251923  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:17.251980  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:17.252030  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:17.252077  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:17.253784  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:17.253895  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:17.307643  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:17.313691  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.057485ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:17.413599  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.936963ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:17.425184  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:17.489257  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:17.498434  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:17.498434  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:17.498443  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:17.498519  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:17.498932  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:17.506025  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:17.506073  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:17.513423  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.878287ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:17.613829  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.04439ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:17.714029  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.324212ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:17.813510  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.883996ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:17.913727  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.016379ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:18.013475  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.825323ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:18.102737  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:18.103003  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:18.103027  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:18.103059  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:18.103077  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:18.104654  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:18.113563  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.966209ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:18.213543  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.95316ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:18.215027  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:18.215202  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:18.215220  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:18.215357  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:18.215421  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:18.215439  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:18.252203  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:18.252228  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:18.252297  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:18.252333  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:18.253977  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:18.254026  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:18.307852  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:18.313678  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.103507ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:18.413972  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.269348ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:18.425418  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:18.489444  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:18.498662  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:18.498686  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:18.498725  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:18.498743  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:18.499077  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:18.506386  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:18.506397  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:18.513721  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.007542ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:18.613677  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.043653ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:18.713891  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.233164ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:18.813749  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.030831ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:18.913559  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.879968ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:19.013479  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.869824ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:19.050777  108489 httplog.go:90] GET /api/v1/namespaces/default: (1.800363ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:19.053104  108489 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.690088ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:19.055100  108489 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.434073ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:19.102968  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:19.103172  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:19.103191  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:19.103198  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:19.103263  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:19.104766  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:19.113592  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.996731ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:19.213145  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.685423ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:19.215249  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:19.215362  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:19.215525  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:19.215526  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:19.215782  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:19.215784  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:19.252446  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:19.252446  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:19.252545  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:19.252462  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:19.254178  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:19.254178  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:19.308089  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:19.313884  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.235434ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:19.413766  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.003631ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:19.425670  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:19.489730  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:19.498961  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:19.498998  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:19.498959  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:19.498972  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:19.499241  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:19.506645  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:19.506671  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:19.513779  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.111742ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:19.613175  108489 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 25.035161972s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:49:59 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:50:19.613245  108489 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 25.035243825s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:49:59 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:50:19.613267  108489 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 25.035266815s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:49:59 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:50:19.613283  108489 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 25.035282616s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:49:59 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:50:19.613354  108489 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 25.035307415s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:49:59 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:50:19.613372  108489 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 25.035325477s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:49:59 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:50:19.613389  108489 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 25.035341738s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:49:59 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:50:19.613405  108489 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 25.035358111s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:49:59 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:50:19.613440  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.824871ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:19.613479  108489 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 25.035365232s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:49:59 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:50:19.613509  108489 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 25.03539525s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 06:49:49 +0000 UTC,LastTransitionTime:2019-09-20 06:49:59 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 06:50:19.613526  108489 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 25.03541274s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 06:49:49 +0000 UTC,LastTransitionTime:2019-09-20 06:49:59 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 06:50:19.613541  108489 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 25.035427125s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 06:49:49 +0000 UTC,LastTransitionTime:2019-09-20 06:49:59 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 06:50:19.715193  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.263656ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:19.717178  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.441746ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
Sep 20 06:50:19.717: INFO: Waiting up to 15s for pod "testpod-2" in namespace "taint-based-evictions43bf6697-75f3-4516-84da-833de9f9fc5b" to be "terminating"
I0920 06:50:19.719510  108489 httplog.go:90] GET /api/v1/namespaces/taint-based-evictions43bf6697-75f3-4516-84da-833de9f9fc5b/pods/testpod-2: (1.601515ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
Sep 20 06:50:19.719: INFO: Pod "testpod-2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.13661ms
Sep 20 06:50:19.719: INFO: Pod "testpod-2" satisfied condition "terminating"
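
(The "Waiting up to 15s for pod … to be \"terminating\"" lines follow the standard polling pattern from k8s.io/apimachinery's wait package. A minimal sketch of such a helper — hypothetical name and wiring, assuming a configured clientset and the context-free client-go Get of this vintage:

import (
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForTerminating polls until the pod has a deletion timestamp set, which
// is what "terminating" means here; if that never happens, wait.PollImmediate
// returns the generic "timed out waiting for the condition" error.
func waitForTerminating(cs kubernetes.Interface, ns, name string) error {
	return wait.PollImmediate(100*time.Millisecond, 15*time.Second, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		return pod.DeletionTimestamp != nil, nil
	})
}

)
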
I0920 06:50:19.725768  108489 httplog.go:90] DELETE /api/v1/namespaces/taint-based-evictions43bf6697-75f3-4516-84da-833de9f9fc5b/pods/testpod-2: (5.701819ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:19.726086  108489 taint_manager.go:383] Noticed pod deletion: types.NamespacedName{Namespace:"taint-based-evictions43bf6697-75f3-4516-84da-833de9f9fc5b", Name:"testpod-2"}
I0920 06:50:19.726179  108489 timed_workers.go:129] Cancelling TimedWorkerQueue item taint-based-evictions43bf6697-75f3-4516-84da-833de9f9fc5b/testpod-2 at 2019-09-20 06:50:19.726176035 +0000 UTC m=+331.348356374
I0920 06:50:19.729097  108489 httplog.go:90] GET /api/v1/namespaces/taint-based-evictions43bf6697-75f3-4516-84da-833de9f9fc5b/pods/testpod-2: (1.718782ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
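
(The "Cancelling TimedWorkerQueue item" line above is the taint manager dropping a scheduled eviction because the pod was deleted before its tolerationSeconds elapsed. A much-reduced stand-in built on time.AfterFunc, illustrating the schedule/cancel semantics rather than the actual k8s.io/kubernetes TimedWorkerQueue implementation:

package main

import (
	"fmt"
	"sync"
	"time"
)

// evictionQueue is an illustrative reduction: AddWork schedules a deletion
// for the future, CancelWork stops it if the pod goes away first.
type evictionQueue struct {
	mu      sync.Mutex
	workers map[string]*time.Timer
}

func (q *evictionQueue) AddWork(key string, delay time.Duration, evict func()) {
	q.mu.Lock()
	defer q.mu.Unlock()
	q.workers[key] = time.AfterFunc(delay, evict)
}

func (q *evictionQueue) CancelWork(key string) {
	q.mu.Lock()
	defer q.mu.Unlock()
	if t, ok := q.workers[key]; ok {
		t.Stop() // pod deleted before tolerationSeconds elapsed
		delete(q.workers, key)
	}
}

func main() {
	q := &evictionQueue{workers: map[string]*time.Timer{}}
	q.AddWork("ns/testpod-2", 200*time.Second, func() { fmt.Println("evict") })
	q.CancelWork("ns/testpod-2") // mirrors "Cancelling TimedWorkerQueue item"
}

)
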
I0920 06:50:19.735447  108489 node_tree.go:113] Removed node "node-0" in group "region1:\x00:zone1" from NodeTree
I0920 06:50:19.735489  108489 taint_manager.go:422] Noticed node deletion: "node-0"
I0920 06:50:19.738023  108489 node_tree.go:113] Removed node "node-1" in group "region1:\x00:zone1" from NodeTree
I0920 06:50:19.738103  108489 taint_manager.go:422] Noticed node deletion: "node-1"
I0920 06:50:19.740553  108489 httplog.go:90] DELETE /api/v1/nodes: (11.004395ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0920 06:50:19.740622  108489 node_tree.go:113] Removed node "node-2" in group "region1:\x00:zone1" from NodeTree
I0920 06:50:19.740643  108489 taint_manager.go:422] Noticed node deletion: "node-2"
I0920 06:50:19.900273  108489 httplog.go:90] GET /api/v1/namespaces/default: (1.774976ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38296]
I0920 06:50:19.902390  108489 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.459526ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38296]
I0920 06:50:19.904246  108489 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.317985ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38296]
I0920 06:50:20.101065  108489 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.774277ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52156]
I0920 06:50:20.103334  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:20.103346  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:20.103376  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:20.103377  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:20.103418  108489 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.618118ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52156]
I0920 06:50:20.103445  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:20.105135  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:20.105771  108489 scheduling_queue.go:830] About to try and schedule pod taint-based-evictionsab06a422-210e-4398-a535-6b106be64cc2/testpod-1
I0920 06:50:20.105983  108489 scheduler.go:530] Attempting to schedule pod: taint-based-evictionsab06a422-210e-4398-a535-6b106be64cc2/testpod-1
I0920 06:50:20.106011  108489 httplog.go:90] GET /api/v1/namespaces/kube-node-lease: (1.703387ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52156]
I0920 06:50:20.106477  108489 factory.go:541] Unable to schedule taint-based-evictionsab06a422-210e-4398-a535-6b106be64cc2/testpod-1: no fit: 0/3 nodes are available: 3 node(s) had taints that the pod didn't tolerate.; waiting
I0920 06:50:20.106636  108489 factory.go:615] Updating pod condition for taint-based-evictionsab06a422-210e-4398-a535-6b106be64cc2/testpod-1 to (PodScheduled==False, Reason=Unschedulable)
I0920 06:50:20.108366  108489 httplog.go:90] GET /api/v1/namespaces/taint-based-evictionsab06a422-210e-4398-a535-6b106be64cc2/pods/testpod-1: (1.456828ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41922]
I0920 06:50:20.108368  108489 httplog.go:90] GET /api/v1/namespaces/taint-based-evictionsab06a422-210e-4398-a535-6b106be64cc2/pods/testpod-1: (1.543178ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41920]
I0920 06:50:20.108945  108489 generic_scheduler.go:337] Preemption will not help schedule pod taint-based-evictionsab06a422-210e-4398-a535-6b106be64cc2/testpod-1 on any node.
I0920 06:50:20.215526  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:20.215533  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:20.215692  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:20.215752  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:20.215989  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:20.215991  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:20.252686  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:20.252747  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:20.252747  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:20.252736  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:20.254371  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:20.254384  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:20.308466  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:20.425955  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:20.490190  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:20.499200  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:20.499205  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:20.499279  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:20.499324  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:20.499440  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:20.506929  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:20.506929  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:50:20.532499  108489 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 1m0.028836323s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:49:25 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:50:20.532575  108489 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 1m0.028922824s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:49:25 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:50:20.532592  108489 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 1m0.028939944s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:49:25 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:50:20.532605  108489 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 1m0.028951182s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:49:25 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:50:20.532660  108489 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 1m0.029146483s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:49:25 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:50:20.532672  108489 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 1m0.029158706s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:49:25 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:50:20.532684  108489 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 1m0.029170212s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:49:25 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:50:20.532718  108489 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 1m0.029203961s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:49:25 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:50:20.532750  108489 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 1m0.02915078s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:49:25 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:50:20.532761  108489 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 1m0.029161745s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:49:25 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:50:20.532772  108489 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 1m0.029172099s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:49:25 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:50:20.532782  108489 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 1m0.029182142s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:49:25 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
    --- FAIL: TestTaintBasedEvictions/Taint_based_evictions_for_NodeNotReady_and_0_tolerationseconds (35.21s)
        taint_test.go:782: Failed to taint node in test 2 <node-2>, err: timed out waiting for the condition

				from junit_d965d8661547eb73cabe6d94d5550ec333e4c0fa_20190920-063834.xml
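
(The failure text, "timed out waiting for the condition", is the generic timeout error from the wait package: the test gave up waiting for node-2 to carry the expected taint. The check being polled is presumably along these lines — a hypothetical sketch, not taint_test.go itself:

import (
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForNotReadyTaint polls until the node carries the NoExecute not-ready
// taint that TaintBasedEvictions is expected to apply to an unreachable node.
func waitForNotReadyTaint(cs kubernetes.Interface, nodeName string) error {
	return wait.PollImmediate(100*time.Millisecond, time.Minute, func() (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(nodeName, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, t := range node.Spec.Taints {
			if t.Key == "node.kubernetes.io/not-ready" && t.Effect == v1.TaintEffectNoExecute {
				return true, nil
			}
		}
		return false, nil // keep polling; exhausting the timeout yields the error above
	})
}

)
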



k8s.io/kubernetes/test/integration/scheduler TestTaintBasedEvictions/Taint_based_evictions_for_NodeNotReady_and_200_tolerationseconds 35s

go test -v k8s.io/kubernetes/test/integration/scheduler -run TestTaintBasedEvictions/Taint_based_evictions_for_NodeNotReady_and_200_tolerationseconds$
=== RUN   TestTaintBasedEvictions/Taint_based_evictions_for_NodeNotReady_and_200_tolerationseconds
W0920 06:48:36.401557  108489 services.go:35] No CIDR for service cluster IPs specified. Default value which was 10.0.0.0/24 is deprecated and will be removed in future releases. Please specify it using --service-cluster-ip-range on kube-apiserver.
I0920 06:48:36.401837  108489 services.go:47] Setting service IP to "10.0.0.1" (read-write).
I0920 06:48:36.401974  108489 master.go:303] Node port range unspecified. Defaulting to 30000-32767.
I0920 06:48:36.402045  108489 master.go:259] Using reconciler: 
I0920 06:48:36.404610  108489 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.405047  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:48:36.405257  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:48:36.406656  108489 store.go:1342] Monitoring podtemplates count at <storage-prefix>//podtemplates
I0920 06:48:36.406695  108489 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.406972  108489 reflector.go:153] Listing and watching *core.PodTemplate from storage/cacher.go:/podtemplates
I0920 06:48:36.407025  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:48:36.407200  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:48:36.408937  108489 store.go:1342] Monitoring events count at <storage-prefix>//events
I0920 06:48:36.409006  108489 reflector.go:153] Listing and watching *core.Event from storage/cacher.go:/events
I0920 06:48:36.408987  108489 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.409064  108489 watch_cache.go:405] Replace watchCache (rev: 49714) 
I0920 06:48:36.409614  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:48:36.409648  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:48:36.409848  108489 watch_cache.go:405] Replace watchCache (rev: 49714) 
I0920 06:48:36.410604  108489 store.go:1342] Monitoring limitranges count at <storage-prefix>//limitranges
I0920 06:48:36.410644  108489 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.410669  108489 reflector.go:153] Listing and watching *core.LimitRange from storage/cacher.go:/limitranges
I0920 06:48:36.410819  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:48:36.410843  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:48:36.411525  108489 watch_cache.go:405] Replace watchCache (rev: 49714) 
I0920 06:48:36.412604  108489 store.go:1342] Monitoring resourcequotas count at <storage-prefix>//resourcequotas
I0920 06:48:36.412687  108489 reflector.go:153] Listing and watching *core.ResourceQuota from storage/cacher.go:/resourcequotas
I0920 06:48:36.412800  108489 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.412964  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:48:36.412986  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:48:36.413759  108489 store.go:1342] Monitoring secrets count at <storage-prefix>//secrets
I0920 06:48:36.413867  108489 reflector.go:153] Listing and watching *core.Secret from storage/cacher.go:/secrets
I0920 06:48:36.414093  108489 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.414310  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:48:36.414389  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:48:36.414939  108489 watch_cache.go:405] Replace watchCache (rev: 49714) 
I0920 06:48:36.415204  108489 store.go:1342] Monitoring persistentvolumes count at <storage-prefix>//persistentvolumes
I0920 06:48:36.415397  108489 reflector.go:153] Listing and watching *core.PersistentVolume from storage/cacher.go:/persistentvolumes
I0920 06:48:36.415402  108489 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.415398  108489 watch_cache.go:405] Replace watchCache (rev: 49714) 
I0920 06:48:36.415572  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:48:36.415592  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:48:36.416127  108489 watch_cache.go:405] Replace watchCache (rev: 49714) 
I0920 06:48:36.416471  108489 store.go:1342] Monitoring persistentvolumeclaims count at <storage-prefix>//persistentvolumeclaims
I0920 06:48:36.416663  108489 reflector.go:153] Listing and watching *core.PersistentVolumeClaim from storage/cacher.go:/persistentvolumeclaims
I0920 06:48:36.416667  108489 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.416871  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:48:36.416891  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:48:36.417602  108489 store.go:1342] Monitoring configmaps count at <storage-prefix>//configmaps
I0920 06:48:36.417673  108489 reflector.go:153] Listing and watching *core.ConfigMap from storage/cacher.go:/configmaps
I0920 06:48:36.417828  108489 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.417898  108489 watch_cache.go:405] Replace watchCache (rev: 49714) 
I0920 06:48:36.417949  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:48:36.417971  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:48:36.418901  108489 store.go:1342] Monitoring namespaces count at <storage-prefix>//namespaces
I0920 06:48:36.418943  108489 watch_cache.go:405] Replace watchCache (rev: 49714) 
I0920 06:48:36.418944  108489 reflector.go:153] Listing and watching *core.Namespace from storage/cacher.go:/namespaces
I0920 06:48:36.419065  108489 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.419187  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:48:36.419206  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:48:36.419960  108489 store.go:1342] Monitoring endpoints count at <storage-prefix>//services/endpoints
I0920 06:48:36.420047  108489 reflector.go:153] Listing and watching *core.Endpoints from storage/cacher.go:/services/endpoints
I0920 06:48:36.420052  108489 watch_cache.go:405] Replace watchCache (rev: 49714) 
I0920 06:48:36.420288  108489 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.420468  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:48:36.420489  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:48:36.420894  108489 watch_cache.go:405] Replace watchCache (rev: 49714) 
I0920 06:48:36.421534  108489 store.go:1342] Monitoring nodes count at <storage-prefix>//minions
I0920 06:48:36.421562  108489 reflector.go:153] Listing and watching *core.Node from storage/cacher.go:/minions
I0920 06:48:36.422254  108489 watch_cache.go:405] Replace watchCache (rev: 49714) 
I0920 06:48:36.422547  108489 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.422689  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:48:36.422731  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:48:36.423479  108489 store.go:1342] Monitoring pods count at <storage-prefix>//pods
I0920 06:48:36.423541  108489 reflector.go:153] Listing and watching *core.Pod from storage/cacher.go:/pods
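Each pair of client.go:361 "parsed scheme: \"endpoint\"" and endpoint.go:66 "ccResolverWrapper: sending new addresses" lines records the etcd client's gRPC name resolver handing the static server list to a freshly built client connection; the printed tuple {http://127.0.0.1:2379 0  <nil>} is one resolver address. A rough sketch of that push-style contract follows, with ClientConn, Address, and resolveEndpoint all invented here for illustration rather than taken from grpc-go.

package main

import "fmt"

// Address mirrors what the log prints per endpoint:
// {http://127.0.0.1:2379 0  <nil>}.
type Address struct{ Addr string }

// ClientConn is the sink the resolver pushes addresses into
// (an invented interface standing in for the gRPC wrapper).
type ClientConn interface {
	UpdateAddresses(addrs []Address)
}

type loggingConn struct{}

func (loggingConn) UpdateAddresses(addrs []Address) {
	fmt.Printf("ccResolverWrapper: sending new addresses to cc: %v\n", addrs)
}

// resolveEndpoint plays the role of the "endpoint" scheme resolver:
// it hands a static server list to the connection once.
func resolveEndpoint(cc ClientConn, servers []string) {
	addrs := make([]Address, 0, len(servers))
	for _, s := range servers {
		addrs = append(addrs, Address{Addr: s})
	}
	cc.UpdateAddresses(addrs)
}

func main() {
	resolveEndpoint(loggingConn{}, []string{"http://127.0.0.1:2379"})
}

One such connection is opened per resource store, which is why this pair of lines repeats before every "Monitoring ... count" line.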
I0920 06:48:36.423676  108489 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.423888  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:48:36.423918  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:48:36.424661  108489 store.go:1342] Monitoring serviceaccounts count at <storage-prefix>//serviceaccounts
I0920 06:48:36.424570  108489 watch_cache.go:405] Replace watchCache (rev: 49715) 
I0920 06:48:36.424792  108489 reflector.go:153] Listing and watching *core.ServiceAccount from storage/cacher.go:/serviceaccounts
I0920 06:48:36.424843  108489 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.424969  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:48:36.424989  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:48:36.425791  108489 store.go:1342] Monitoring services count at <storage-prefix>//services/specs
I0920 06:48:36.425832  108489 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.425970  108489 reflector.go:153] Listing and watching *core.Service from storage/cacher.go:/services/specs
I0920 06:48:36.425984  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:48:36.425991  108489 watch_cache.go:405] Replace watchCache (rev: 49715) 
I0920 06:48:36.426005  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:48:36.426796  108489 watch_cache.go:405] Replace watchCache (rev: 49715) 
I0920 06:48:36.427092  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:48:36.427121  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:48:36.428053  108489 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.428217  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:48:36.428242  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:48:36.429088  108489 store.go:1342] Monitoring replicationcontrollers count at <storage-prefix>//controllers
I0920 06:48:36.429116  108489 rest.go:115] the default service ipfamily for this cluster is: IPv4
I0920 06:48:36.429153  108489 reflector.go:153] Listing and watching *core.ReplicationController from storage/cacher.go:/controllers
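The reflector.go:153 "Listing and watching *core.X" lines mark each cacher seeding itself with one full LIST and then switching to a WATCH stream for incremental events. A simplified, stdlib-only sketch of that list-then-watch loop is below; Store, listAndWatch, and the string-typed events are stand-ins, and the real client-go Reflector additionally tracks resource versions and relists on watch errors.

package main

import (
	"context"
	"fmt"
	"time"
)

// Store receives the initial LIST result, like the cacher's store.
type Store interface{ Replace(items []string) }

type printStore struct{}

func (printStore) Replace(items []string) {
	fmt.Println("replaced store with", len(items), "items")
}

// listAndWatch does one full LIST to seed the store, then consumes
// watch events until the context ends.
func listAndWatch(ctx context.Context, s Store, list func() []string, watch func() <-chan string) {
	s.Replace(list())
	events := watch()
	for {
		select {
		case <-ctx.Done():
			return
		case ev := <-events:
			fmt.Println("event:", ev)
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 50*time.Millisecond)
	defer cancel()
	ch := make(chan string, 1)
	ch <- "ADDED controller-a"
	listAndWatch(ctx, printStore{},
		func() []string { return []string{"controller-a"} },
		func() <-chan string { return ch })
}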
I0920 06:48:36.429569  108489 storage_factory.go:285] storing bindings in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.429784  108489 storage_factory.go:285] storing componentstatuses in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.430096  108489 watch_cache.go:405] Replace watchCache (rev: 49715) 
I0920 06:48:36.430375  108489 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.430940  108489 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.431426  108489 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.432010  108489 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.432303  108489 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.432406  108489 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.432540  108489 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.432968  108489 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.433426  108489 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.433577  108489 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.434184  108489 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.434371  108489 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.434735  108489 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.434971  108489 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.435532  108489 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.435751  108489 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.435908  108489 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.436036  108489 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.436220  108489 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.436461  108489 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.436802  108489 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.437498  108489 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.437851  108489 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.438604  108489 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.439369  108489 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.439717  108489 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.440078  108489 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.440791  108489 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.441199  108489 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.442075  108489 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.443160  108489 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.443833  108489 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.444520  108489 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.444834  108489 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.445030  108489 master.go:450] Skipping disabled API group "auditregistration.k8s.io".
I0920 06:48:36.445131  108489 master.go:461] Enabling API group "authentication.k8s.io".
I0920 06:48:36.445229  108489 master.go:461] Enabling API group "authorization.k8s.io".
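The master.go:450/461 lines show the apiserver walking its known API groups and installing REST storage only for the enabled ones; at this version auditregistration.k8s.io defaults off while authentication.k8s.io and authorization.k8s.io are on. A toy gate over the group names taken from the log (the map-based switch is illustrative, not the apiserver's actual config plumbing):

package main

import "fmt"

func main() {
	enabled := map[string]bool{
		"auditregistration.k8s.io": false, // skipped in the log
		"authentication.k8s.io":    true,
		"authorization.k8s.io":     true,
	}
	for _, g := range []string{
		"auditregistration.k8s.io",
		"authentication.k8s.io",
		"authorization.k8s.io",
	} {
		if !enabled[g] {
			fmt.Printf("Skipping disabled API group %q.\n", g)
			continue
		}
		fmt.Printf("Enabling API group %q.\n", g)
	}
}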
I0920 06:48:36.445428  108489 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.445673  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:48:36.445775  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:48:36.447206  108489 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0920 06:48:36.447389  108489 reflector.go:153] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0920 06:48:36.447395  108489 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.448377  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:48:36.448615  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:48:36.448516  108489 watch_cache.go:405] Replace watchCache (rev: 49715) 
I0920 06:48:36.449848  108489 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0920 06:48:36.449955  108489 reflector.go:153] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0920 06:48:36.450026  108489 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.450191  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:48:36.450210  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:48:36.450906  108489 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0920 06:48:36.451281  108489 master.go:461] Enabling API group "autoscaling".
I0920 06:48:36.451515  108489 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.451806  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:48:36.451941  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:48:36.451126  108489 watch_cache.go:405] Replace watchCache (rev: 49715) 
I0920 06:48:36.451186  108489 reflector.go:153] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0920 06:48:36.453042  108489 store.go:1342] Monitoring jobs.batch count at <storage-prefix>//jobs
I0920 06:48:36.453145  108489 reflector.go:153] Listing and watching *batch.Job from storage/cacher.go:/jobs
I0920 06:48:36.453367  108489 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.453567  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:48:36.453645  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:48:36.453937  108489 watch_cache.go:405] Replace watchCache (rev: 49715) 
I0920 06:48:36.454433  108489 store.go:1342] Monitoring cronjobs.batch count at <storage-prefix>//cronjobs
I0920 06:48:36.454464  108489 master.go:461] Enabling API group "batch".
I0920 06:48:36.454616  108489 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.454769  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:48:36.454792  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:48:36.454873  108489 reflector.go:153] Listing and watching *batch.CronJob from storage/cacher.go:/cronjobs
I0920 06:48:36.455545  108489 store.go:1342] Monitoring certificatesigningrequests.certificates.k8s.io count at <storage-prefix>//certificatesigningrequests
I0920 06:48:36.455774  108489 master.go:461] Enabling API group "certificates.k8s.io".
I0920 06:48:36.455641  108489 watch_cache.go:405] Replace watchCache (rev: 49715) 
I0920 06:48:36.455681  108489 reflector.go:153] Listing and watching *certificates.CertificateSigningRequest from storage/cacher.go:/certificatesigningrequests
I0920 06:48:36.456056  108489 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.456658  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:48:36.456779  108489 watch_cache.go:405] Replace watchCache (rev: 49715) 
I0920 06:48:36.456772  108489 watch_cache.go:405] Replace watchCache (rev: 49715) 
I0920 06:48:36.456959  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:48:36.457807  108489 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0920 06:48:36.457938  108489 reflector.go:153] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0920 06:48:36.457982  108489 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.458131  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:48:36.458149  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:48:36.458936  108489 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0920 06:48:36.459008  108489 reflector.go:153] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0920 06:48:36.458963  108489 master.go:461] Enabling API group "coordination.k8s.io".
I0920 06:48:36.459183  108489 master.go:450] Skipping disabled API group "discovery.k8s.io".
I0920 06:48:36.459397  108489 watch_cache.go:405] Replace watchCache (rev: 49715) 
I0920 06:48:36.459721  108489 watch_cache.go:405] Replace watchCache (rev: 49715) 
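Each watch_cache.go:405 line marks a cacher swapping in the freshly listed snapshot at a specific etcd revision, so reads served from the cache are consistent as of that revision. A compact sketch of that atomic replace, assuming a mutex-guarded map keyed by object name; the real watchCache also keeps a rolling window of events for watchers.

package main

import (
	"fmt"
	"sync"
)

type watchCache struct {
	mu              sync.RWMutex
	resourceVersion uint64
	items           map[string]string
}

// Replace installs a complete new snapshot at revision rev,
// mirroring the "Replace watchCache (rev: ...)" log lines.
func (w *watchCache) Replace(items map[string]string, rev uint64) {
	w.mu.Lock()
	defer w.mu.Unlock()
	w.items = items
	w.resourceVersion = rev
	fmt.Printf("Replace watchCache (rev: %d)\n", rev)
}

func (w *watchCache) Get(name string) (string, bool) {
	w.mu.RLock()
	defer w.mu.RUnlock()
	v, ok := w.items[name]
	return v, ok
}

func main() {
	w := &watchCache{items: map[string]string{}}
	w.Replace(map[string]string{"lease-a": "v1"}, 49715)
	if v, ok := w.Get("lease-a"); ok {
		fmt.Println("cached:", v)
	}
}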
I0920 06:48:36.459992  108489 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.460119  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:48:36.460150  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:48:36.460901  108489 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0920 06:48:36.460934  108489 master.go:461] Enabling API group "extensions".
I0920 06:48:36.460949  108489 reflector.go:153] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0920 06:48:36.461083  108489 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.461195  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:48:36.461222  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:48:36.461674  108489 watch_cache.go:405] Replace watchCache (rev: 49715) 
I0920 06:48:36.462686  108489 store.go:1342] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I0920 06:48:36.462854  108489 reflector.go:153] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I0920 06:48:36.462870  108489 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.462999  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:48:36.463018  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:48:36.464969  108489 watch_cache.go:405] Replace watchCache (rev: 49715) 
I0920 06:48:36.468747  108489 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0920 06:48:36.468792  108489 master.go:461] Enabling API group "networking.k8s.io".
I0920 06:48:36.468794  108489 reflector.go:153] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0920 06:48:36.468841  108489 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.469104  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:48:36.469132  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:48:36.469656  108489 watch_cache.go:405] Replace watchCache (rev: 49716) 
I0920 06:48:36.471286  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:36.471584  108489 store.go:1342] Monitoring runtimeclasses.node.k8s.io count at <storage-prefix>//runtimeclasses
I0920 06:48:36.471626  108489 master.go:461] Enabling API group "node.k8s.io".
I0920 06:48:36.471680  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:36.471750  108489 reflector.go:153] Listing and watching *node.RuntimeClass from storage/cacher.go:/runtimeclasses
I0920 06:48:36.471865  108489 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.472017  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:36.472025  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:48:36.472053  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:48:36.472499  108489 watch_cache.go:405] Replace watchCache (rev: 49716) 
I0920 06:48:36.473148  108489 store.go:1342] Monitoring poddisruptionbudgets.policy count at <storage-prefix>//poddisruptionbudgets
I0920 06:48:36.473211  108489 reflector.go:153] Listing and watching *policy.PodDisruptionBudget from storage/cacher.go:/poddisruptionbudgets
I0920 06:48:36.473334  108489 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.473551  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:48:36.473572  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:48:36.473797  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:36.474007  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:36.474226  108489 watch_cache.go:405] Replace watchCache (rev: 49716) 
I0920 06:48:36.474963  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:36.476103  108489 store.go:1342] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicy
I0920 06:48:36.477807  108489 master.go:461] Enabling API group "policy".
I0920 06:48:36.476112  108489 reflector.go:153] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicy
I0920 06:48:36.477523  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
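The reflector.go:236 "forcing resync" lines come from the shared informer factory periodically re-delivering cached objects to their handlers even when etcd reports no changes, which keeps slow or newly registered handlers converged. A stdlib ticker sketch of that loop; forceResync, the period, and the snapshot callback are all made up for illustration.

package main

import (
	"context"
	"fmt"
	"time"
)

// forceResync re-emits the current cache contents on every tick,
// the way informers re-sync handlers at their resync period.
func forceResync(ctx context.Context, period time.Duration, snapshot func() []string) {
	t := time.NewTicker(period)
	defer t.Stop()
	for {
		select {
		case <-ctx.Done():
			return
		case <-t.C:
			fmt.Println("forcing resync:", snapshot())
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 120*time.Millisecond)
	defer cancel()
	forceResync(ctx, 50*time.Millisecond, func() []string {
		return []string{"pod-a", "pod-b"}
	})
}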
I0920 06:48:36.477976  108489 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.478769  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:48:36.478802  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:48:36.478953  108489 watch_cache.go:405] Replace watchCache (rev: 49716) 
I0920 06:48:36.479482  108489 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0920 06:48:36.479675  108489 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.479767  108489 reflector.go:153] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0920 06:48:36.480082  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:48:36.480171  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:48:36.480567  108489 watch_cache.go:405] Replace watchCache (rev: 49716) 
I0920 06:48:36.481014  108489 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0920 06:48:36.481051  108489 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.481186  108489 reflector.go:153] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0920 06:48:36.481311  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:48:36.481456  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:48:36.482472  108489 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0920 06:48:36.482650  108489 watch_cache.go:405] Replace watchCache (rev: 49716) 
I0920 06:48:36.482652  108489 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.482911  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:48:36.482928  108489 reflector.go:153] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0920 06:48:36.482933  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:48:36.483972  108489 watch_cache.go:405] Replace watchCache (rev: 49716) 
I0920 06:48:36.485770  108489 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0920 06:48:36.485816  108489 reflector.go:153] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0920 06:48:36.485853  108489 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.486055  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:48:36.486115  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:48:36.486878  108489 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0920 06:48:36.486936  108489 reflector.go:153] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0920 06:48:36.487070  108489 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.487228  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:48:36.487255  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:48:36.487950  108489 watch_cache.go:405] Replace watchCache (rev: 49716) 
I0920 06:48:36.488017  108489 watch_cache.go:405] Replace watchCache (rev: 49716) 
I0920 06:48:36.488177  108489 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0920 06:48:36.488327  108489 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.488193  108489 reflector.go:153] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0920 06:48:36.488664  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:48:36.488765  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:48:36.489633  108489 watch_cache.go:405] Replace watchCache (rev: 49716) 
I0920 06:48:36.490011  108489 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0920 06:48:36.490072  108489 reflector.go:153] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0920 06:48:36.490236  108489 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.490392  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:48:36.490416  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:48:36.491058  108489 watch_cache.go:405] Replace watchCache (rev: 49716) 
I0920 06:48:36.491860  108489 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0920 06:48:36.491896  108489 master.go:461] Enabling API group "rbac.authorization.k8s.io".
I0920 06:48:36.491954  108489 reflector.go:153] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0920 06:48:36.492642  108489 watch_cache.go:405] Replace watchCache (rev: 49716) 
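Each store.go:1342 "Monitoring <resource> count" line starts a per-resource poller that counts keys under the resource's etcd prefix every CountMetricPollPeriod (1m per the config above) and exports the result as a metric. A hedged one-shot sketch of the counting step, with countKeys standing in for an etcd COUNT range query over the double-slash prefix seen in the log:

package main

import (
	"fmt"
	"strings"
)

// countKeys stands in for an etcd COUNT query over one prefix,
// e.g. <storage-prefix>//clusterrolebindings.
func countKeys(keys []string, prefix string) int {
	n := 0
	for _, k := range keys {
		if strings.HasPrefix(k, prefix) {
			n++
		}
	}
	return n
}

func main() {
	keys := []string{
		"/registry//clusterrolebindings/admin",
		"/registry//clusterrolebindings/edit",
		"/registry//roles/default/reader",
	}
	fmt.Println("clusterrolebindings count:",
		countKeys(keys, "/registry//clusterrolebindings/"))
}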
I0920 06:48:36.493780  108489 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.493943  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:48:36.493971  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:48:36.494915  108489 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0920 06:48:36.495000  108489 reflector.go:153] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0920 06:48:36.495165  108489 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.495338  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:48:36.495383  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:48:36.495992  108489 watch_cache.go:405] Replace watchCache (rev: 49716) 
I0920 06:48:36.496487  108489 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0920 06:48:36.496515  108489 reflector.go:153] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0920 06:48:36.496517  108489 master.go:461] Enabling API group "scheduling.k8s.io".
I0920 06:48:36.496777  108489 master.go:450] Skipping disabled API group "settings.k8s.io".
I0920 06:48:36.496969  108489 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.497147  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:48:36.497168  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:48:36.497306  108489 watch_cache.go:405] Replace watchCache (rev: 49716) 
I0920 06:48:36.498008  108489 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0920 06:48:36.498068  108489 reflector.go:153] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0920 06:48:36.498204  108489 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.498377  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:48:36.498403  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:48:36.499410  108489 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0920 06:48:36.499456  108489 reflector.go:153] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0920 06:48:36.499505  108489 watch_cache.go:405] Replace watchCache (rev: 49716) 
I0920 06:48:36.500341  108489 watch_cache.go:405] Replace watchCache (rev: 49716) 
I0920 06:48:36.500569  108489 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.500815  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:48:36.500837  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
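Each parsed scheme: "endpoint" / ccResolverWrapper pair is the etcd v3 client wiring its gRPC "endpoint" resolver before one of these stores connects. A minimal standalone equivalent, assuming the import path vendored by 2019-era Kubernetes (go.etcd.io/etcd/clientv3; current releases use go.etcd.io/etcd/client/v3):

package main

import (
	"context"
	"time"

	"go.etcd.io/etcd/clientv3"
)

func main() {
	// Dialing etcd the same way the test apiserver does; clientv3.New sets up
	// the gRPC resolver that produces the "parsed scheme" lines in the log.
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"http://127.0.0.1:2379"},
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()
	// Count keys under the test's randomized prefix (prefix taken from the log).
	resp, err := cli.Get(ctx, "/0664be19-d89c-46c9-bca4-64fbea908a9d",
		clientv3.WithPrefix(), clientv3.WithCountOnly())
	if err != nil {
		panic(err)
	}
	_ = resp.Count
}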
I0920 06:48:36.503107  108489 store.go:1342] Monitoring csinodes.storage.k8s.io count at <storage-prefix>//csinodes
I0920 06:48:36.503186  108489 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.503349  108489 reflector.go:153] Listing and watching *storage.CSINode from storage/cacher.go:/csinodes
I0920 06:48:36.503508  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:48:36.503569  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:48:36.504634  108489 store.go:1342] Monitoring csidrivers.storage.k8s.io count at <storage-prefix>//csidrivers
I0920 06:48:36.504731  108489 reflector.go:153] Listing and watching *storage.CSIDriver from storage/cacher.go:/csidrivers
I0920 06:48:36.504947  108489 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.505448  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:48:36.505482  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:48:36.505872  108489 watch_cache.go:405] Replace watchCache (rev: 49717) 
I0920 06:48:36.506499  108489 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0920 06:48:36.506567  108489 reflector.go:153] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0920 06:48:36.507043  108489 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.507224  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:48:36.507251  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:48:36.507537  108489 watch_cache.go:405] Replace watchCache (rev: 49717) 
I0920 06:48:36.507765  108489 watch_cache.go:405] Replace watchCache (rev: 49717) 
I0920 06:48:36.508043  108489 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0920 06:48:36.508062  108489 reflector.go:153] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0920 06:48:36.508193  108489 master.go:461] Enabling API group "storage.k8s.io".
I0920 06:48:36.508491  108489 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.508777  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:48:36.508939  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:48:36.509514  108489 watch_cache.go:405] Replace watchCache (rev: 49717) 
I0920 06:48:36.509817  108489 store.go:1342] Monitoring deployments.apps count at <storage-prefix>//deployments
I0920 06:48:36.509901  108489 reflector.go:153] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0920 06:48:36.510018  108489 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.510164  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:48:36.510196  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:48:36.510869  108489 store.go:1342] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0920 06:48:36.510914  108489 reflector.go:153] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0920 06:48:36.511038  108489 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.511168  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:48:36.511186  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:48:36.511282  108489 watch_cache.go:405] Replace watchCache (rev: 49717) 
I0920 06:48:36.512018  108489 store.go:1342] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0920 06:48:36.512085  108489 reflector.go:153] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0920 06:48:36.512405  108489 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.512518  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:48:36.512533  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:48:36.512690  108489 watch_cache.go:405] Replace watchCache (rev: 49717) 
I0920 06:48:36.513499  108489 watch_cache.go:405] Replace watchCache (rev: 49717) 
I0920 06:48:36.513625  108489 store.go:1342] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0920 06:48:36.513669  108489 reflector.go:153] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0920 06:48:36.514254  108489 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.514492  108489 watch_cache.go:405] Replace watchCache (rev: 49717)
I0920 06:48:36.515143  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:48:36.515241  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:48:36.515962  108489 store.go:1342] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0920 06:48:36.515989  108489 master.go:461] Enabling API group "apps".
I0920 06:48:36.516043  108489 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.516189  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:48:36.516219  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:48:36.516294  108489 reflector.go:153] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0920 06:48:36.516903  108489 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0920 06:48:36.516944  108489 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.517079  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:48:36.517100  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:48:36.517177  108489 reflector.go:153] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0920 06:48:36.517952  108489 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0920 06:48:36.517994  108489 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.518027  108489 reflector.go:153] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0920 06:48:36.518140  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:48:36.518157  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:48:36.518765  108489 watch_cache.go:405] Replace watchCache (rev: 49717) 
I0920 06:48:36.518817  108489 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0920 06:48:36.518853  108489 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.519021  108489 reflector.go:153] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0920 06:48:36.519150  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:48:36.519257  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:48:36.519391  108489 watch_cache.go:405] Replace watchCache (rev: 49717) 
I0920 06:48:36.519565  108489 watch_cache.go:405] Replace watchCache (rev: 49717) 
I0920 06:48:36.519848  108489 watch_cache.go:405] Replace watchCache (rev: 49717) 
I0920 06:48:36.520165  108489 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0920 06:48:36.520187  108489 master.go:461] Enabling API group "admissionregistration.k8s.io".
I0920 06:48:36.520220  108489 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.520302  108489 reflector.go:153] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0920 06:48:36.520491  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:48:36.520518  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:48:36.520905  108489 watch_cache.go:405] Replace watchCache (rev: 49717) 
I0920 06:48:36.521589  108489 store.go:1342] Monitoring events count at <storage-prefix>//events
I0920 06:48:36.521630  108489 master.go:461] Enabling API group "events.k8s.io".
I0920 06:48:36.521668  108489 reflector.go:153] Listing and watching *core.Event from storage/cacher.go:/events
I0920 06:48:36.522298  108489 watch_cache.go:405] Replace watchCache (rev: 49717) 
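The Monitoring / "Listing and watching" / "Replace watchCache (rev: N)" triplets show the apiserver watch cache doing an initial LIST and swapping the result in at that resource version before serving watches from memory. The same list-then-watch loop is available client-side via a client-go reflector; a minimal sketch (kubeconfig path assumed):

package main

import (
	"time"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/fields"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// LIST events once, then WATCH from the resourceVersion the LIST returned,
	// mirroring the server-side "Listing and watching *core.Event" line above.
	lw := cache.NewListWatchFromClient(
		client.CoreV1().RESTClient(), "events", v1.NamespaceAll, fields.Everything())
	store := cache.NewStore(cache.MetaNamespaceKeyFunc)
	reflector := cache.NewReflector(lw, &v1.Event{}, store, 30*time.Second)

	stop := make(chan struct{})
	go reflector.Run(stop)
	time.Sleep(5 * time.Second)
	close(stop)
}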
I0920 06:48:36.522675  108489 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.523155  108489 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.523520  108489 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.523785  108489 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.524053  108489 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.524222  108489 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.524498  108489 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.524689  108489 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.524918  108489 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.525103  108489 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.526120  108489 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.526548  108489 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.527525  108489 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.527855  108489 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.528905  108489 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.529450  108489 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.530619  108489 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.531238  108489 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.532198  108489 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.532509  108489 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0920 06:48:36.532648  108489 genericapiserver.go:404] Skipping API batch/v2alpha1 because it has no resources.
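The "Skipping API batch/v2alpha1 because it has no resources" warnings mark group/versions left out of the served API; skipped versions simply never appear in discovery. A quick way to see what a server actually serves, using current client-go (kubeconfig path assumed):

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// Every group/version the server enables shows up here; the ones the log
	// reports as skipped (batch/v2alpha1, apps/v1beta1, ...) are absent.
	groups, err := client.Discovery().ServerGroups()
	if err != nil {
		panic(err)
	}
	for _, g := range groups.Groups {
		for _, v := range g.Versions {
			fmt.Println(v.GroupVersion) // e.g. "apps/v1", "batch/v1"
		}
	}
}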
I0920 06:48:36.533405  108489 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.534226  108489 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.534884  108489 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.535694  108489 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.536525  108489 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.537401  108489 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.537782  108489 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.538674  108489 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.539617  108489 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.539911  108489 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.540621  108489 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0920 06:48:36.540788  108489 genericapiserver.go:404] Skipping API node.k8s.io/v1alpha1 because it has no resources.
I0920 06:48:36.542872  108489 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.543368  108489 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.544173  108489 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.544848  108489 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.545536  108489 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.546632  108489 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.547175  108489 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.547654  108489 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.548171  108489 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.548883  108489 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.549467  108489 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0920 06:48:36.549574  108489 genericapiserver.go:404] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
I0920 06:48:36.550224  108489 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.551009  108489 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0920 06:48:36.551074  108489 genericapiserver.go:404] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
I0920 06:48:36.551692  108489 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.552351  108489 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.552543  108489 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.553082  108489 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.553530  108489 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.554070  108489 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.554624  108489 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0920 06:48:36.554684  108489 genericapiserver.go:404] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
I0920 06:48:36.555381  108489 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.556192  108489 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.556519  108489 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.557196  108489 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.557434  108489 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.557687  108489 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.558219  108489 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.558470  108489 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.558782  108489 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.559430  108489 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.559656  108489 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.559899  108489 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0920 06:48:36.559956  108489 genericapiserver.go:404] Skipping API apps/v1beta2 because it has no resources.
W0920 06:48:36.559963  108489 genericapiserver.go:404] Skipping API apps/v1beta1 because it has no resources.
I0920 06:48:36.560499  108489 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.561236  108489 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.562013  108489 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.562564  108489 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:48:36.563463  108489 storage_factory.go:285] storing events.events.k8s.io in events.k8s.io/v1beta1, reading as events.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0664be19-d89c-46c9-bca4-64fbea908a9d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
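The storage_factory.go lines above dump every field of the etcd storage configuration the test apiserver was built with. As a minimal sketch, here is that same configuration reconstructed as the Go struct it prints from; the field names and values are read directly off the log lines, and the import path k8s.io/apiserver/pkg/storage/storagebackend is assumed for this era of the tree:

package main

import (
	"fmt"
	"time"

	"k8s.io/apiserver/pkg/storage/storagebackend"
)

func main() {
	// Values copied from the storage_factory.go dumps above; zero-valued
	// fields (Type, Codec, Transformer, ...) are omitted as in the log.
	cfg := storagebackend.Config{
		Prefix: "0664be19-d89c-46c9-bca4-64fbea908a9d",
		Transport: storagebackend.TransportConfig{
			ServerList: []string{"http://127.0.0.1:2379"},
		},
		Paging:                true,
		CompactionInterval:    300 * time.Second, // logged as 300000000000 (ns)
		CountMetricPollPeriod: 60 * time.Second,  // logged as 60000000000 (ns)
	}
	fmt.Printf("%#v\n", cfg)
}
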
I0920 06:48:36.569186  108489 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0920 06:48:36.569269  108489 healthz.go:177] healthz check poststarthook/bootstrap-controller failed: not finished
I0920 06:48:36.569327  108489 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 06:48:36.569342  108489 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0920 06:48:36.569353  108489 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0920 06:48:36.569383  108489 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[-]poststarthook/bootstrap-controller failed: reason withheld
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0920 06:48:36.569420  108489 httplog.go:90] GET /healthz: (364.284µs) 0 [Go-http-client/1.1 127.0.0.1:34104]
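The GET /healthz probes above (and those that follow) come from the harness polling the apiserver until every named check flips from [-]failed to [+]ok. A minimal sketch of such a readiness poll over plain HTTP; the URL, port, interval, and timeout below are illustrative assumptions, not values taken from this job:

package main

import (
	"fmt"
	"io/ioutil"
	"net/http"
	"time"
)

// waitForHealthz polls url until the apiserver returns 200, i.e. until every
// check in the verbose listing reports [+]ok rather than [-]failed.
func waitForHealthz(url string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := http.Get(url)
		if err == nil {
			body, _ := ioutil.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			// Non-200 responses carry the per-check [+]/[-] listing seen above.
			fmt.Printf("not ready yet:\n%s", body)
		}
		time.Sleep(100 * time.Millisecond)
	}
	return fmt.Errorf("healthz %s: not ready after %v", url, timeout)
}

func main() {
	// Port and path are illustrative; this job's test server address differs.
	if err := waitForHealthz("http://127.0.0.1:8080/healthz?verbose", 30*time.Second); err != nil {
		fmt.Println(err)
	}
}
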
I0920 06:48:36.570692  108489 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.630119ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:36.573625  108489 httplog.go:90] GET /api/v1/services: (1.3969ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:36.579505  108489 httplog.go:90] GET /api/v1/services: (2.048894ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:36.584468  108489 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0920 06:48:36.584507  108489 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 06:48:36.584521  108489 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0920 06:48:36.584531  108489 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0920 06:48:36.584541  108489 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0920 06:48:36.584577  108489 httplog.go:90] GET /healthz: (321.859µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:36.589943  108489 httplog.go:90] GET /api/v1/namespaces/kube-system: (5.658655ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34104]
I0920 06:48:36.590541  108489 httplog.go:90] GET /api/v1/services: (3.563082ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34108]
I0920 06:48:36.590552  108489 httplog.go:90] GET /api/v1/services: (4.72368ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:36.593091  108489 httplog.go:90] POST /api/v1/namespaces: (2.644295ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34104]
I0920 06:48:36.596800  108489 httplog.go:90] GET /api/v1/namespaces/kube-public: (3.285433ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34108]
I0920 06:48:36.607390  108489 httplog.go:90] POST /api/v1/namespaces: (9.988393ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34108]
I0920 06:48:36.613769  108489 httplog.go:90] GET /api/v1/namespaces/kube-node-lease: (5.524664ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34108]
I0920 06:48:36.617695  108489 httplog.go:90] POST /api/v1/namespaces: (3.166934ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34108]
I0920 06:48:37.401663  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:48:37.401827  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:48:37.471470  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:37.472186  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:37.473100  108489 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 06:48:37.473127  108489 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0920 06:48:37.473137  108489 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0920 06:48:37.473144  108489 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0920 06:48:37.473202  108489 httplog.go:90] GET /healthz: (1.667081ms) 0 [Go-http-client/1.1 127.0.0.1:34108]
I0920 06:48:37.473469  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:37.473881  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:37.474551  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:37.475196  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:37.478033  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
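The reflector.go "forcing resync" lines are emitted by client-go informers constructed with a non-zero resync period: at each period the shared informer replays its cached objects to registered handlers. A minimal sketch of that setup; the in-cluster config, 30-second period, and pod informer are assumptions for illustration:

package main

import (
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/cache"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// A non-zero resync period makes every reflector built from this factory
	// periodically log "forcing resync" and replay its cache to handlers.
	factory := informers.NewSharedInformerFactory(client, 30*time.Second)
	factory.Core().V1().Pods().Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
		UpdateFunc: func(oldObj, newObj interface{}) {
			// On a forced resync, oldObj and newObj are the same cached object.
		},
	})

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)
	factory.WaitForCacheSync(stop)
	time.Sleep(time.Minute) // long enough to observe a couple of resyncs
}
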
I0920 06:48:37.571098  108489 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.200381ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34108]
I0920 06:48:37.571106  108489 httplog.go:90] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-node-critical: (2.171161ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:37.575043  108489 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 06:48:37.575072  108489 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0920 06:48:37.575084  108489 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0920 06:48:37.575092  108489 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0920 06:48:37.575129  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.707377ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34320]
I0920 06:48:37.575133  108489 httplog.go:90] GET /healthz: (4.283316ms) 0 [Go-http-client/1.1 127.0.0.1:34318]
I0920 06:48:37.576476  108489 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: (4.854391ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34108]
I0920 06:48:37.576734  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.123263ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34320]
I0920 06:48:37.578160  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (956.005µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34108]
I0920 06:48:37.579128  108489 httplog.go:90] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (7.518426ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:37.579322  108489 storage_scheduling.go:139] created PriorityClass system-node-critical with value 2000001000
I0920 06:48:37.579444  108489 httplog.go:90] POST /api/v1/namespaces/kube-system/configmaps: (2.163872ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34318]
I0920 06:48:37.579858  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (1.072963ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34108]
I0920 06:48:37.580602  108489 httplog.go:90] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-cluster-critical: (1.063506ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:37.581227  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (971.137µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34108]
I0920 06:48:37.583140  108489 httplog.go:90] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (2.128704ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:37.583358  108489 storage_scheduling.go:139] created PriorityClass system-cluster-critical with value 2000000000
I0920 06:48:37.583382  108489 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
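The two POSTs above bootstrap the built-in PriorityClasses system-node-critical (value 2000001000) and system-cluster-critical (value 2000000000). A user-defined class is created through the same API; a minimal sketch with client-go, where the name, value, and in-cluster config are illustrative, and the context-free Create signature matches client-go of this era (newer releases also take a context.Context and CreateOptions):

package main

import (
	schedulingv1beta1 "k8s.io/api/scheduling/v1beta1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	pc := &schedulingv1beta1.PriorityClass{
		ObjectMeta: metav1.ObjectMeta{Name: "high-priority"}, // illustrative name
		// User classes must stay below the system values logged above
		// (system-cluster-critical = 2000000000).
		Value:         1000000,
		GlobalDefault: false,
		Description:   "Example class for latency-sensitive pods.",
	}
	if _, err := client.SchedulingV1beta1().PriorityClasses().Create(pc); err != nil {
		panic(err)
	}
}
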
I0920 06:48:37.589241  108489 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 06:48:37.589300  108489 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 06:48:37.589351  108489 httplog.go:90] GET /healthz: (3.261046ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:37.594052  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (12.404405ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34108]
I0920 06:48:37.596253  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (1.55826ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:37.599075  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (2.395671ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:37.607601  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (1.538416ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:37.610071  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-admin: (1.665831ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:37.612342  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.68408ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:37.613320  108489 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/cluster-admin
I0920 06:48:37.614447  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:discovery: (863.885µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:37.616682  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.829437ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:37.616958  108489 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:discovery
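Each clusterrole in this run follows the same get-or-create reconcile pattern: a GET that returns 404, then a POST that returns 201 and logs "created clusterrole...". A minimal sketch of that pattern against the RBAC API; the role name is illustrative, a fake clientset stands in for a real one, and the context-free Get/Create signatures match client-go of this era:

package main

import (
	rbacv1 "k8s.io/api/rbac/v1"
	"k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/fake"
)

// ensureClusterRole is the pattern behind each GET/POST pair above:
// look the role up, and create it only when the GET comes back NotFound.
func ensureClusterRole(client kubernetes.Interface, role *rbacv1.ClusterRole) error {
	_, err := client.RbacV1().ClusterRoles().Get(role.Name, metav1.GetOptions{})
	if err == nil {
		return nil // already exists; nothing to do
	}
	if !errors.IsNotFound(err) {
		return err
	}
	_, err = client.RbacV1().ClusterRoles().Create(role) // the POST that logs 201
	return err
}

func main() {
	client := fake.NewSimpleClientset()
	role := &rbacv1.ClusterRole{ObjectMeta: metav1.ObjectMeta{Name: "example:reader"}}
	if err := ensureClusterRole(client, role); err != nil {
		panic(err)
	}
}
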
I0920 06:48:37.618154  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:basic-user: (924.463µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:37.620438  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.547878ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:37.620665  108489 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:basic-user
I0920 06:48:37.622324  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:public-info-viewer: (1.429877ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:37.624513  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.68543ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:37.624774  108489 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:public-info-viewer
I0920 06:48:37.625990  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (968.447µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:37.627857  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.455688ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:37.628110  108489 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/admin
I0920 06:48:37.629282  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (971.44µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:37.631517  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.586922ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:37.631737  108489 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/edit
I0920 06:48:37.632777  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (842.338µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:37.634535  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.345907ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:37.634837  108489 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/view
I0920 06:48:37.636079  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (949.53µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:37.637988  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.42056ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:37.638239  108489 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin
I0920 06:48:37.639257  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (836.527µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:37.642200  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.442088ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:37.642545  108489 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit
I0920 06:48:37.643760  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (905.728µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:37.646237  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.018953ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:37.646592  108489 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view
I0920 06:48:37.647797  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:heapster: (954.639µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:37.650026  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.741167ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:37.650240  108489 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:heapster
I0920 06:48:37.651348  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node: (890.259µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:37.654020  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.250537ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:37.654494  108489 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node
I0920 06:48:37.656156  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-problem-detector: (1.451835ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:37.658467  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.816178ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:37.658638  108489 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector
I0920 06:48:37.659963  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kubelet-api-admin: (1.094153ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:37.662098  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.684142ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:37.662337  108489 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kubelet-api-admin
I0920 06:48:37.669804  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-bootstrapper: (2.236447ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:37.673819  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.229315ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:37.674454  108489 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-bootstrapper
I0920 06:48:37.676086  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:auth-delegator: (1.329086ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:37.678065  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.576218ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:37.678299  108489 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:auth-delegator
I0920 06:48:37.680641  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-aggregator: (2.059974ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:37.684066  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.923524ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:37.684317  108489 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator
I0920 06:48:37.686968  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-controller-manager: (2.362001ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:37.689257  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.735566ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:37.689488  108489 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager
I0920 06:48:37.690538  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-dns: (838.081µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:37.692647  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.703649ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:37.693067  108489 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-dns
I0920 06:48:37.694159  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:persistent-volume-provisioner: (808.307µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:37.696210  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.60596ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:37.696754  108489 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner
I0920 06:48:37.697939  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-attacher: (910.648µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:37.699900  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.473014ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:37.700204  108489 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:csi-external-attacher
I0920 06:48:37.702618  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:nodeclient: (2.244358ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:37.704573  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.530509ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:37.704878  108489 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:nodeclient
I0920 06:48:37.705987  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient: (910.078µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:37.708494  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.969431ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:37.708889  108489 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
I0920 06:48:37.710263  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:volume-scheduler: (1.066723ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:37.712242  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.54837ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:37.712429  108489 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:volume-scheduler
I0920 06:48:37.713565  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-proxier: (946.084µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:37.715656  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.704538ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:37.715923  108489 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-proxier
I0920 06:48:37.718101  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-scheduler: (1.984143ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:37.721452  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.830198ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:37.721925  108489 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler
I0920 06:48:37.724185  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-provisioner: (2.048354ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:37.726524  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.91334ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:37.727110  108489 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:csi-external-provisioner
I0920 06:48:37.728626  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:attachdetach-controller: (1.171997ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:37.731359  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.106221ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:37.732564  108489 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0920 06:48:37.734246  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:clusterrole-aggregation-controller: (1.35195ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:37.736438  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.628246ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:37.736797  108489 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0920 06:48:37.738222  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:cronjob-controller: (1.135112ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:37.740550  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.77577ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:37.740837  108489 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0920 06:48:37.742047  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:daemon-set-controller: (970.287µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:37.744124  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.563384ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:37.744403  108489 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0920 06:48:37.745676  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:deployment-controller: (1.077779ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:37.747883  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.720068ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:37.748193  108489 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller
I0920 06:48:37.749279  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:disruption-controller: (885.348µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:37.751974  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.13426ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:37.752333  108489 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller
I0920 06:48:37.753648  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:endpoint-controller: (1.094017ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:37.755922  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.81064ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:37.756234  108489 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0920 06:48:37.757422  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:expand-controller: (950.157µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:37.759613  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.682755ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:37.759939  108489 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:expand-controller
I0920 06:48:37.761248  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:generic-garbage-collector: (1.015599ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:37.765004  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.063537ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:37.766033  108489 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0920 06:48:37.767165  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:horizontal-pod-autoscaler: (822.589µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:37.769640  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.003591ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:37.769885  108489 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0920 06:48:37.771456  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:job-controller: (1.411634ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34318]
I0920 06:48:37.773350  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.4629ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34318]
I0920 06:48:37.773568  108489 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller
I0920 06:48:37.774811  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:namespace-controller: (885.832µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34318]
I0920 06:48:37.777139  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.860288ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34318]
I0920 06:48:37.777482  108489 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:namespace-controller
I0920 06:48:37.778922  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:node-controller: (1.130803ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34318]
I0920 06:48:37.781978  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.893449ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34318]
I0920 06:48:37.782242  108489 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:node-controller
I0920 06:48:37.783317  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:persistent-volume-binder: (870.031µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34318]
I0920 06:48:37.785906  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.046966ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34318]
I0920 06:48:37.786104  108489 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0920 06:48:37.787114  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pod-garbage-collector: (858.185µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34318]
I0920 06:48:37.788910  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.367741ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34318]
I0920 06:48:37.789176  108489 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0920 06:48:37.790228  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replicaset-controller: (894.397µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34318]
I0920 06:48:37.792638  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.893581ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34318]
I0920 06:48:37.792881  108489 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0920 06:48:37.793950  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replication-controller: (845.723µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34318]
I0920 06:48:37.795860  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.535432ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34318]
I0920 06:48:37.796132  108489 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller
I0920 06:48:37.797136  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:resourcequota-controller: (816.168µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34318]
I0920 06:48:37.798872  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.403384ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34318]
I0920 06:48:37.799130  108489 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0920 06:48:37.800263  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:route-controller: (853.757µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34318]
I0920 06:48:37.802506  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.700451ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34318]
I0920 06:48:37.802688  108489 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:route-controller
I0920 06:48:37.804330  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-account-controller: (1.312344ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34318]
I0920 06:48:37.807321  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.575399ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34318]
I0920 06:48:37.807573  108489 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-account-controller
I0920 06:48:37.808841  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-controller: (959.818µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34318]
I0920 06:48:37.811228  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.855134ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34318]
I0920 06:48:37.811490  108489 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-controller
I0920 06:48:37.812597  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:statefulset-controller: (927.1µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34318]
I0920 06:48:37.814905  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.809519ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34318]
I0920 06:48:37.815154  108489 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0920 06:48:37.816233  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:ttl-controller: (818.165µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34318]
I0920 06:48:37.819145  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.504192ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34318]
I0920 06:48:37.819400  108489 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-controller
I0920 06:48:37.823173  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:certificate-controller: (1.612342ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34318]
I0920 06:48:37.827999  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.574116ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34318]
I0920 06:48:37.828485  108489 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller
I0920 06:48:37.829765  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pvc-protection-controller: (997.596µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34318]
I0920 06:48:37.832359  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.893075ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34318]
I0920 06:48:37.832620  108489 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0920 06:48:37.852105  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pv-protection-controller: (1.591878ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34318]
I0920 06:48:37.871314  108489 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 06:48:37.871346  108489 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 06:48:37.871392  108489 httplog.go:90] GET /healthz: (1.216795ms) 0 [Go-http-client/1.1 127.0.0.1:34318]
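(The block above repeats throughout this log: the test apiserver's /healthz handler enumerates every registered check, and the probe as a whole fails until the rbac/bootstrap-roles post-start hook reports finished. A minimal sketch of waiting for readiness by polling the endpoint; the address, timeout, and use of the verbose query parameter here are assumptions for illustration, not taken from this job's wiring.)

package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls GET /healthz until it returns 200 OK or the deadline
// passes. A verbose body lists [+]/[-] per check, as in the log above.
func waitForHealthz(base string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := http.Get(base + "/healthz?verbose")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz not ready:\n%s\n", body)
		}
		time.Sleep(100 * time.Millisecond)
	}
	return fmt.Errorf("healthz did not become ready within %v", timeout)
}

func main() {
	// Hypothetical local apiserver address; the integration test builds its own.
	if err := waitForHealthz("http://127.0.0.1:8080", 30*time.Second); err != nil {
		panic(err)
	}
}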
I0920 06:48:37.873081  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.766081ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:37.873325  108489 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0920 06:48:37.888500  108489 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 06:48:37.888541  108489 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 06:48:37.888585  108489 httplog.go:90] GET /healthz: (2.532286ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:37.892564  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin: (2.064349ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:37.912852  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.320471ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:37.913221  108489 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin
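(The alternating GET→404 and POST→201 pairs above, repeated for every default clusterrole and clusterrolebinding in this section, are the RBAC bootstrap reconciler ensuring each object exists: look it up by name, create it only when missing, then log the creation. A rough sketch of that ensure-exists pattern against the same REST paths, assuming a plain HTTP client and a pre-serialized JSON payload; both are placeholders, not how the test itself is wired.)

package main

import (
	"bytes"
	"fmt"
	"net/http"
)

// ensureClusterRoleBinding mirrors the pattern in the log: GET by name,
// and POST the object only when the lookup returns 404 Not Found.
func ensureClusterRoleBinding(base, name string, payload []byte) error {
	url := base + "/apis/rbac.authorization.k8s.io/v1/clusterrolebindings"
	resp, err := http.Get(url + "/" + name)
	if err != nil {
		return err
	}
	resp.Body.Close()
	switch resp.StatusCode {
	case http.StatusOK:
		return nil // already present; nothing to create
	case http.StatusNotFound:
		// missing; fall out of the switch and create it
	default:
		return fmt.Errorf("lookup of %s: unexpected status %d", name, resp.StatusCode)
	}
	resp, err = http.Post(url, "application/json", bytes.NewReader(payload))
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusCreated {
		return fmt.Errorf("create of %s: status %d", name, resp.StatusCode)
	}
	return nil
}

func main() {
	// Hypothetical apiserver address and payload, for illustration only.
	fmt.Println(ensureClusterRoleBinding("http://127.0.0.1:8080", "cluster-admin", []byte(`{}`)))
}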
I0920 06:48:37.932323  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (1.82791ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:37.953369  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.687185ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:37.954079  108489 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery
I0920 06:48:37.971876  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:basic-user: (1.369181ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34318]
I0920 06:48:37.971975  108489 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 06:48:37.972000  108489 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 06:48:37.972038  108489 httplog.go:90] GET /healthz: (1.965006ms) 0 [Go-http-client/1.1 127.0.0.1:34102]
I0920 06:48:37.987372  108489 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 06:48:37.987401  108489 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 06:48:37.987440  108489 httplog.go:90] GET /healthz: (1.33476ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:37.992795  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.247773ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:37.993089  108489 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user
I0920 06:48:38.015913  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:public-info-viewer: (1.648971ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:38.044629  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (14.168394ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:38.044875  108489 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:public-info-viewer
I0920 06:48:38.052153  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node-proxier: (1.697848ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:38.073073  108489 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 06:48:38.073108  108489 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 06:48:38.073164  108489 httplog.go:90] GET /healthz: (3.102438ms) 0 [Go-http-client/1.1 127.0.0.1:34102]
I0920 06:48:38.079109  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (8.648377ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34318]
I0920 06:48:38.079485  108489 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier
I0920 06:48:38.087309  108489 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 06:48:38.087342  108489 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 06:48:38.087385  108489 httplog.go:90] GET /healthz: (1.372868ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34318]
I0920 06:48:38.091980  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-controller-manager: (1.556546ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34318]
I0920 06:48:38.113005  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.502968ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34318]
I0920 06:48:38.113305  108489 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager
I0920 06:48:38.132044  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-dns: (1.547216ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34318]
I0920 06:48:38.153087  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.564667ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34318]
I0920 06:48:38.153385  108489 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns
I0920 06:48:38.172257  108489 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 06:48:38.172288  108489 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 06:48:38.172354  108489 httplog.go:90] GET /healthz: (1.097517ms) 0 [Go-http-client/1.1 127.0.0.1:34318]
I0920 06:48:38.173320  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-scheduler: (2.063693ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:38.187474  108489 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 06:48:38.187504  108489 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 06:48:38.187562  108489 httplog.go:90] GET /healthz: (1.509004ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:38.192602  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.149848ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:38.192883  108489 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler
I0920 06:48:38.212068  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:volume-scheduler: (1.497024ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:38.233404  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.887727ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:38.233725  108489 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:volume-scheduler
I0920 06:48:38.256117  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node: (5.623138ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:38.273065  108489 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 06:48:38.273100  108489 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 06:48:38.273144  108489 httplog.go:90] GET /healthz: (3.062788ms) 0 [Go-http-client/1.1 127.0.0.1:34102]
I0920 06:48:38.273286  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.66575ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34318]
I0920 06:48:38.274372  108489 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node
I0920 06:48:38.287047  108489 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 06:48:38.287082  108489 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 06:48:38.287126  108489 httplog.go:90] GET /healthz: (1.108249ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34318]
I0920 06:48:38.294219  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:attachdetach-controller: (3.756624ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34318]
I0920 06:48:38.313230  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.709638ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34318]
I0920 06:48:38.313481  108489 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0920 06:48:38.332344  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:clusterrole-aggregation-controller: (1.845258ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34318]
I0920 06:48:38.353153  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.542421ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34318]
I0920 06:48:38.353464  108489 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0920 06:48:38.372209  108489 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 06:48:38.372272  108489 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 06:48:38.372320  108489 httplog.go:90] GET /healthz: (2.265253ms) 0 [Go-http-client/1.1 127.0.0.1:34318]
I0920 06:48:38.373903  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:cronjob-controller: (1.6802ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:38.388009  108489 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 06:48:38.388056  108489 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 06:48:38.388114  108489 httplog.go:90] GET /healthz: (1.741452ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:38.393305  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.721071ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:38.393627  108489 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0920 06:48:38.413262  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:daemon-set-controller: (2.784321ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:38.433102  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.581787ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:38.433426  108489 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0920 06:48:38.452577  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:deployment-controller: (2.120492ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:38.471640  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:38.472371  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:38.473295  108489 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 06:48:38.473320  108489 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 06:48:38.473365  108489 httplog.go:90] GET /healthz: (3.231187ms) 0 [Go-http-client/1.1 127.0.0.1:34102]
I0920 06:48:38.474095  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.335295ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34318]
I0920 06:48:38.474233  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:38.474563  108489 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I0920 06:48:38.474674  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:38.475383  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:38.475456  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:38.478208  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
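(The reflector.go:236 "forcing resync" bursts above come from the shared informer machinery: every informer built from a factory with a non-zero resync period periodically replays its cached objects to registered event handlers. A minimal client-go sketch of that wiring; the kubeconfig path and the 30-second period are assumptions, not values from this test.)

package main

import (
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig; the integration test builds its client in-process.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// A 30s default resync makes each informer re-deliver its cache
	// periodically, which the reflector logs as "forcing resync".
	factory := informers.NewSharedInformerFactory(client, 30*time.Second)
	podInformer := factory.Core().V1().Pods().Informer()
	podInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		UpdateFunc: func(oldObj, newObj interface{}) {
			// Fires on real updates and on every forced resync.
		},
	})

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)
	factory.WaitForCacheSync(stop)
	select {}
}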
I0920 06:48:38.487053  108489 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 06:48:38.487088  108489 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 06:48:38.487129  108489 httplog.go:90] GET /healthz: (1.039522ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34318]
I0920 06:48:38.491883  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:disruption-controller: (1.431372ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34318]
I0920 06:48:38.513116  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.635648ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34318]
I0920 06:48:38.513387  108489 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I0920 06:48:38.532250  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:endpoint-controller: (1.643423ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34318]
I0920 06:48:38.553033  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.520871ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34318]
I0920 06:48:38.553480  108489 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0920 06:48:38.573525  108489 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 06:48:38.573560  108489 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 06:48:38.573608  108489 httplog.go:90] GET /healthz: (2.642035ms) 0 [Go-http-client/1.1 127.0.0.1:34102]
I0920 06:48:38.573720  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:expand-controller: (2.738358ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34318]
I0920 06:48:38.587086  108489 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 06:48:38.587112  108489 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 06:48:38.587152  108489 httplog.go:90] GET /healthz: (1.129686ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34318]
I0920 06:48:38.603757  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (13.274573ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34318]
I0920 06:48:38.604091  108489 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
I0920 06:48:38.614127  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:generic-garbage-collector: (1.603561ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34318]
I0920 06:48:38.633174  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.634988ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34318]
I0920 06:48:38.633836  108489 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0920 06:48:38.652014  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:horizontal-pod-autoscaler: (1.478846ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34318]
I0920 06:48:38.672029  108489 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 06:48:38.672059  108489 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 06:48:38.672100  108489 httplog.go:90] GET /healthz: (2.01837ms) 0 [Go-http-client/1.1 127.0.0.1:34318]
I0920 06:48:38.674610  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.961801ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:38.674961  108489 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0920 06:48:38.687179  108489 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 06:48:38.687211  108489 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 06:48:38.687249  108489 httplog.go:90] GET /healthz: (1.202054ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:38.691658  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:job-controller: (1.036598ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:38.715358  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.371039ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:38.715623  108489 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I0920 06:48:38.733679  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:namespace-controller: (1.246741ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:38.753161  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.626043ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:38.753821  108489 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I0920 06:48:38.771538  108489 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 06:48:38.771569  108489 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 06:48:38.771609  108489 httplog.go:90] GET /healthz: (1.512427ms) 0 [Go-http-client/1.1 127.0.0.1:34102]
I0920 06:48:38.772077  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:node-controller: (1.690658ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34318]
I0920 06:48:38.787453  108489 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 06:48:38.787485  108489 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 06:48:38.787530  108489 httplog.go:90] GET /healthz: (1.479668ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34318]
I0920 06:48:38.792758  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.311177ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34318]
I0920 06:48:38.793010  108489 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I0920 06:48:38.812062  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:persistent-volume-binder: (1.517483ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34318]
I0920 06:48:38.833450  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.936109ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34318]
I0920 06:48:38.833763  108489 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0920 06:48:38.852213  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pod-garbage-collector: (1.508362ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34318]
I0920 06:48:38.876314  108489 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 06:48:38.876347  108489 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 06:48:38.876396  108489 httplog.go:90] GET /healthz: (1.486786ms) 0 [Go-http-client/1.1 127.0.0.1:34102]
I0920 06:48:38.879615  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (4.303042ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34318]
I0920 06:48:38.879996  108489 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0920 06:48:38.888658  108489 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 06:48:38.888691  108489 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 06:48:38.888776  108489 httplog.go:90] GET /healthz: (2.786731ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34318]
I0920 06:48:38.892302  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replicaset-controller: (1.437428ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34318]
I0920 06:48:38.913421  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.904314ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34318]
I0920 06:48:38.913692  108489 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0920 06:48:38.931896  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replication-controller: (1.471756ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34318]
I0920 06:48:38.953278  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.713506ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34318]
I0920 06:48:38.953983  108489 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I0920 06:48:38.981621  108489 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 06:48:38.981659  108489 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 06:48:38.981729  108489 httplog.go:90] GET /healthz: (10.913784ms) 0 [Go-http-client/1.1 127.0.0.1:34318]
I0920 06:48:38.981759  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:resourcequota-controller: (11.185984ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:38.987163  108489 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 06:48:38.987192  108489 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 06:48:38.987232  108489 httplog.go:90] GET /healthz: (1.177708ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34318]
I0920 06:48:38.993579  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.638784ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34318]
I0920 06:48:38.993956  108489 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0920 06:48:39.012112  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:route-controller: (1.556129ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34318]
I0920 06:48:39.032742  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.20968ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34318]
I0920 06:48:39.033028  108489 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I0920 06:48:39.052369  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-account-controller: (1.735455ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34318]
I0920 06:48:39.073212  108489 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 06:48:39.073247  108489 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 06:48:39.073296  108489 httplog.go:90] GET /healthz: (3.256179ms) 0 [Go-http-client/1.1 127.0.0.1:34318]
I0920 06:48:39.073362  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.72747ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:39.073840  108489 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I0920 06:48:39.087194  108489 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 06:48:39.087232  108489 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 06:48:39.087277  108489 httplog.go:90] GET /healthz: (1.174295ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:39.092059  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-controller: (1.571141ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:39.113119  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.570207ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:39.113438  108489 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I0920 06:48:39.131766  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:statefulset-controller: (1.297354ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:39.153074  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.545054ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:39.153487  108489 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0920 06:48:39.171521  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:ttl-controller: (1.014714ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34318]
I0920 06:48:39.171547  108489 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 06:48:39.171570  108489 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 06:48:39.171607  108489 httplog.go:90] GET /healthz: (1.516331ms) 0 [Go-http-client/1.1 127.0.0.1:34102]
I0920 06:48:39.187233  108489 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 06:48:39.187275  108489 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 06:48:39.187318  108489 httplog.go:90] GET /healthz: (1.235795ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34318]
I0920 06:48:39.192904  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.446616ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34318]
I0920 06:48:39.193183  108489 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I0920 06:48:39.212076  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:certificate-controller: (1.54108ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34318]
I0920 06:48:39.232830  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.277205ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34318]
I0920 06:48:39.233135  108489 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I0920 06:48:39.252001  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pvc-protection-controller: (1.47181ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34318]
I0920 06:48:39.271636  108489 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 06:48:39.271675  108489 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 06:48:39.271733  108489 httplog.go:90] GET /healthz: (1.626796ms) 0 [Go-http-client/1.1 127.0.0.1:34318]
I0920 06:48:39.273967  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.105714ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:39.274271  108489 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0920 06:48:39.287274  108489 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 06:48:39.287315  108489 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 06:48:39.287366  108489 httplog.go:90] GET /healthz: (1.239765ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:39.291964  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pv-protection-controller: (1.496243ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:39.313119  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.584797ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:39.313464  108489 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0920 06:48:39.332159  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/extension-apiserver-authentication-reader: (1.607886ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:39.334347  108489 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.52569ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:39.352976  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.516748ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:39.353259  108489 storage_rbac.go:278] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
I0920 06:48:39.373380  108489 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 06:48:39.373412  108489 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 06:48:39.373453  108489 httplog.go:90] GET /healthz: (3.418902ms) 0 [Go-http-client/1.1 127.0.0.1:34102]
I0920 06:48:39.373546  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:bootstrap-signer: (2.783836ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34318]
I0920 06:48:39.375268  108489 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.260199ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34318]
I0920 06:48:39.387397  108489 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 06:48:39.387435  108489 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 06:48:39.387480  108489 httplog.go:90] GET /healthz: (1.454265ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34318]
I0920 06:48:39.393121  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.695941ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34318]
I0920 06:48:39.393419  108489 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0920 06:48:39.412274  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:cloud-provider: (1.790304ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34318]
I0920 06:48:39.414232  108489 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.427541ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34318]
I0920 06:48:39.432596  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.143541ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34318]
I0920 06:48:39.432953  108489 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0920 06:48:39.452051  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:token-cleaner: (1.545169ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34318]
I0920 06:48:39.453983  108489 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.448951ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34318]
I0920 06:48:39.471856  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:39.472537  108489 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 06:48:39.472553  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:39.472564  108489 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 06:48:39.472610  108489 httplog.go:90] GET /healthz: (1.511385ms) 0 [Go-http-client/1.1 127.0.0.1:34318]
I0920 06:48:39.472961  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.822886ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:39.473244  108489 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0920 06:48:39.474388  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:39.474907  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:39.475910  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:39.475923  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:39.478385  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:39.487315  108489 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 06:48:39.487350  108489 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 06:48:39.487403  108489 httplog.go:90] GET /healthz: (1.311901ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:39.491963  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-controller-manager: (1.408072ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:39.493879  108489 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.448701ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:39.513131  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.635492ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:39.513671  108489 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0920 06:48:39.531958  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-scheduler: (1.434437ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:39.533947  108489 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.531486ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:39.555943  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (5.441141ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:39.556358  108489 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0920 06:48:39.573204  108489 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 06:48:39.573243  108489 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 06:48:39.573286  108489 httplog.go:90] GET /healthz: (3.257513ms) 0 [Go-http-client/1.1 127.0.0.1:34102]
I0920 06:48:39.573356  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles/system:controller:bootstrap-signer: (2.199947ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34318]
I0920 06:48:39.575115  108489 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.239693ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34318]
I0920 06:48:39.587210  108489 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 06:48:39.587270  108489 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 06:48:39.587365  108489 httplog.go:90] GET /healthz: (1.084762ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34318]
I0920 06:48:39.592455  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles: (1.968947ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34318]
I0920 06:48:39.592728  108489 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0920 06:48:39.611809  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::extension-apiserver-authentication-reader: (1.300615ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34318]
I0920 06:48:39.614585  108489 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.708992ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34318]
I0920 06:48:39.633469  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.915839ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34318]
I0920 06:48:39.633776  108489 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::extension-apiserver-authentication-reader in kube-system
I0920 06:48:39.652024  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-controller-manager: (1.570919ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34318]
I0920 06:48:39.654163  108489 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.404175ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34318]
I0920 06:48:39.671628  108489 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 06:48:39.671656  108489 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 06:48:39.671720  108489 httplog.go:90] GET /healthz: (1.549396ms) 0 [Go-http-client/1.1 127.0.0.1:34318]
I0920 06:48:39.672254  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.810124ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:39.672504  108489 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0920 06:48:39.687232  108489 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 06:48:39.687267  108489 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 06:48:39.687319  108489 httplog.go:90] GET /healthz: (1.268042ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:39.691838  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-scheduler: (1.31393ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:39.693739  108489 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.433329ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:39.713032  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.510787ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:39.713365  108489 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0920 06:48:39.731825  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:bootstrap-signer: (1.345625ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:39.733894  108489 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.62132ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:39.753042  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.533724ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:39.753382  108489 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0920 06:48:39.771055  108489 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 06:48:39.771085  108489 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 06:48:39.771120  108489 httplog.go:90] GET /healthz: (1.047932ms) 0 [Go-http-client/1.1 127.0.0.1:34102]
I0920 06:48:39.772087  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:cloud-provider: (1.398429ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34318]
I0920 06:48:39.774054  108489 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.196967ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34318]
I0920 06:48:39.787204  108489 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 06:48:39.787236  108489 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 06:48:39.787308  108489 httplog.go:90] GET /healthz: (1.152934ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34318]
I0920 06:48:39.793115  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.579349ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34318]
I0920 06:48:39.793376  108489 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0920 06:48:39.811866  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:token-cleaner: (1.43676ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34318]
I0920 06:48:39.814123  108489 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.638684ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34318]
I0920 06:48:39.832972  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.52107ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34318]
I0920 06:48:39.833245  108489 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0920 06:48:39.852165  108489 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings/system:controller:bootstrap-signer: (1.617167ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34318]
I0920 06:48:39.854672  108489 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.911866ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34318]
I0920 06:48:39.871508  108489 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 06:48:39.871621  108489 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 06:48:39.871781  108489 httplog.go:90] GET /healthz: (1.633962ms) 0 [Go-http-client/1.1 127.0.0.1:34318]
I0920 06:48:39.880671  108489 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings: (10.210558ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:39.880981  108489 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0920 06:48:39.895923  108489 httplog.go:90] GET /healthz: (9.87545ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:39.898326  108489 httplog.go:90] GET /api/v1/namespaces/default: (1.55841ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:39.901401  108489 httplog.go:90] POST /api/v1/namespaces: (2.476622ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:39.903234  108489 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.174013ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:39.908238  108489 httplog.go:90] POST /api/v1/namespaces/default/services: (4.408828ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:39.910602  108489 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.638725ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:39.913191  108489 httplog.go:90] POST /api/v1/namespaces/default/endpoints: (1.970503ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:39.971230  108489 httplog.go:90] GET /healthz: (983.341µs) 200 [Go-http-client/1.1 127.0.0.1:34102]
W0920 06:48:39.972894  108489 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0920 06:48:39.972971  108489 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0920 06:48:39.973123  108489 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0920 06:48:39.973278  108489 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0920 06:48:39.973438  108489 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0920 06:48:39.973533  108489 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0920 06:48:39.973627  108489 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0920 06:48:39.973735  108489 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0920 06:48:39.973832  108489 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0920 06:48:39.973940  108489 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0920 06:48:39.974032  108489 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0920 06:48:39.974204  108489 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0920 06:48:39.974388  108489 factory.go:294] Creating scheduler from algorithm provider 'DefaultProvider'
I0920 06:48:39.974475  108489 factory.go:382] Creating scheduler with fit predicates 'map[CheckNodeUnschedulable:{} CheckVolumeBinding:{} GeneralPredicates:{} MatchInterPodAffinity:{} MaxAzureDiskVolumeCount:{} MaxCSIVolumeCountPred:{} MaxEBSVolumeCount:{} MaxGCEPDVolumeCount:{} NoDiskConflict:{} NoVolumeZoneConflict:{} PodToleratesNodeTaints:{}]' and priority functions 'map[BalancedResourceAllocation:{} ImageLocalityPriority:{} InterPodAffinityPriority:{} LeastRequestedPriority:{} NodeAffinityPriority:{} NodePreferAvoidPodsPriority:{} SelectorSpreadPriority:{} TaintTolerationPriority:{}]'
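The predicate list above includes PodToleratesNodeTaints, the fit check this test exercises. Below is a minimal sketch of how such a check can be expressed with the k8s.io/api/core/v1 types; it is illustrative only, not the scheduler's actual implementation.

// Illustrative sketch of a taint/toleration fit check in the spirit of the
// PodToleratesNodeTaints predicate (not the scheduler's real code).
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

// podToleratesTaints reports whether every hard (NoSchedule/NoExecute) taint
// on the node is matched by at least one toleration on the pod.
func podToleratesTaints(pod *v1.Pod, node *v1.Node) bool {
	for i := range node.Spec.Taints {
		taint := &node.Spec.Taints[i]
		if taint.Effect == v1.TaintEffectPreferNoSchedule {
			continue // soft taint; scored by priorities, not a hard filter
		}
		tolerated := false
		for j := range pod.Spec.Tolerations {
			if pod.Spec.Tolerations[j].ToleratesTaint(taint) {
				tolerated = true
				break
			}
		}
		if !tolerated {
			return false
		}
	}
	return true
}

func main() {
	node := &v1.Node{Spec: v1.NodeSpec{Taints: []v1.Taint{
		{Key: "node.kubernetes.io/not-ready", Effect: v1.TaintEffectNoSchedule},
	}}}
	pod := &v1.Pod{} // no tolerations
	fmt.Println(podToleratesTaints(pod, node)) // false: the taint is not tolerated
}

Toleration.ToleratesTaint performs the key/value/effect matching, so the sketch only has to decide which taint effects count as hard requirements.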
I0920 06:48:39.974948  108489 shared_informer.go:197] Waiting for caches to sync for scheduler
I0920 06:48:39.975675  108489 reflector.go:118] Starting reflector *v1.Pod (12h0m0s) from k8s.io/kubernetes/test/integration/scheduler/util.go:231
I0920 06:48:39.977941  108489 reflector.go:153] Listing and watching *v1.Pod from k8s.io/kubernetes/test/integration/scheduler/util.go:231
I0920 06:48:39.979587  108489 httplog.go:90] GET /api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0: (1.038867ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I0920 06:48:39.981330  108489 get.go:251] Starting watch for /api/v1/pods, rv=49715 labels= fields=status.phase!=Failed,status.phase!=Succeeded timeout=9m23s
I0920 06:48:40.075305  108489 shared_informer.go:227] caches populated
I0920 06:48:40.075492  108489 shared_informer.go:204] Caches are synced for scheduler 
I0920 06:48:40.075963  108489 reflector.go:118] Starting reflector *v1.StatefulSet (1s) from k8s.io/client-go/informers/factory.go:134
I0920 06:48:40.075994  108489 reflector.go:153] Listing and watching *v1.StatefulSet from k8s.io/client-go/informers/factory.go:134
I0920 06:48:40.076034  108489 reflector.go:118] Starting reflector *v1.ReplicationController (1s) from k8s.io/client-go/informers/factory.go:134
I0920 06:48:40.076047  108489 reflector.go:153] Listing and watching *v1.ReplicationController from k8s.io/client-go/informers/factory.go:134
I0920 06:48:40.076063  108489 reflector.go:118] Starting reflector *v1.StorageClass (1s) from k8s.io/client-go/informers/factory.go:134
I0920 06:48:40.076081  108489 reflector.go:153] Listing and watching *v1.StorageClass from k8s.io/client-go/informers/factory.go:134
I0920 06:48:40.076128  108489 reflector.go:118] Starting reflector *v1.PersistentVolume (1s) from k8s.io/client-go/informers/factory.go:134
I0920 06:48:40.076137  108489 reflector.go:118] Starting reflector *v1.ReplicaSet (1s) from k8s.io/client-go/informers/factory.go:134
I0920 06:48:40.076155  108489 reflector.go:153] Listing and watching *v1.ReplicaSet from k8s.io/client-go/informers/factory.go:134
I0920 06:48:40.076145  108489 reflector.go:153] Listing and watching *v1.PersistentVolume from k8s.io/client-go/informers/factory.go:134
I0920 06:48:40.075969  108489 reflector.go:118] Starting reflector *v1beta1.PodDisruptionBudget (1s) from k8s.io/client-go/informers/factory.go:134
I0920 06:48:40.076243  108489 reflector.go:153] Listing and watching *v1beta1.PodDisruptionBudget from k8s.io/client-go/informers/factory.go:134
I0920 06:48:40.076286  108489 reflector.go:118] Starting reflector *v1beta1.CSINode (1s) from k8s.io/client-go/informers/factory.go:134
I0920 06:48:40.076305  108489 reflector.go:153] Listing and watching *v1beta1.CSINode from k8s.io/client-go/informers/factory.go:134
I0920 06:48:40.076509  108489 reflector.go:118] Starting reflector *v1.Service (1s) from k8s.io/client-go/informers/factory.go:134
I0920 06:48:40.076519  108489 reflector.go:153] Listing and watching *v1.Service from k8s.io/client-go/informers/factory.go:134
I0920 06:48:40.076591  108489 reflector.go:118] Starting reflector *v1.PersistentVolumeClaim (1s) from k8s.io/client-go/informers/factory.go:134
I0920 06:48:40.076603  108489 reflector.go:118] Starting reflector *v1.Node (1s) from k8s.io/client-go/informers/factory.go:134
I0920 06:48:40.076608  108489 reflector.go:153] Listing and watching *v1.PersistentVolumeClaim from k8s.io/client-go/informers/factory.go:134
I0920 06:48:40.076620  108489 reflector.go:153] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:134
I0920 06:48:40.077510  108489 httplog.go:90] GET /apis/apps/v1/replicasets?limit=500&resourceVersion=0: (416.834µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34994]
I0920 06:48:40.077536  108489 httplog.go:90] GET /apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: (421.466µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35000]
I0920 06:48:40.077515  108489 httplog.go:90] GET /apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: (311.355µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35002]
I0920 06:48:40.077515  108489 httplog.go:90] GET /apis/apps/v1/statefulsets?limit=500&resourceVersion=0: (493.812µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34318]
I0920 06:48:40.077599  108489 httplog.go:90] GET /api/v1/persistentvolumes?limit=500&resourceVersion=0: (482.774µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34996]
I0920 06:48:40.077858  108489 httplog.go:90] GET /api/v1/services?limit=500&resourceVersion=0: (528.296µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35008]
I0920 06:48:40.077918  108489 httplog.go:90] GET /api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: (339.749µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35010]
I0920 06:48:40.077918  108489 httplog.go:90] GET /apis/storage.k8s.io/v1beta1/csinodes?limit=500&resourceVersion=0: (321.676µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35004]
I0920 06:48:40.078109  108489 get.go:251] Starting watch for /apis/storage.k8s.io/v1/storageclasses, rv=49717 labels= fields= timeout=8m33s
I0920 06:48:40.078302  108489 httplog.go:90] GET /api/v1/replicationcontrollers?limit=500&resourceVersion=0: (336.633µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35000]
I0920 06:48:40.078353  108489 httplog.go:90] GET /api/v1/nodes?limit=500&resourceVersion=0: (321.8µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35002]
I0920 06:48:40.078533  108489 get.go:251] Starting watch for /api/v1/services, rv=49907 labels= fields= timeout=5m8s
I0920 06:48:40.078731  108489 get.go:251] Starting watch for /apis/apps/v1/replicasets, rv=49717 labels= fields= timeout=5m49s
I0920 06:48:40.078812  108489 get.go:251] Starting watch for /api/v1/persistentvolumeclaims, rv=49714 labels= fields= timeout=8m29s
I0920 06:48:40.078854  108489 get.go:251] Starting watch for /apis/policy/v1beta1/poddisruptionbudgets, rv=49716 labels= fields= timeout=6m2s
I0920 06:48:40.078891  108489 get.go:251] Starting watch for /apis/apps/v1/statefulsets, rv=49717 labels= fields= timeout=5m30s
I0920 06:48:40.078904  108489 get.go:251] Starting watch for /api/v1/persistentvolumes, rv=49714 labels= fields= timeout=8m28s
I0920 06:48:40.079092  108489 get.go:251] Starting watch for /api/v1/nodes, rv=49714 labels= fields= timeout=6m4s
I0920 06:48:40.079153  108489 get.go:251] Starting watch for /apis/storage.k8s.io/v1beta1/csinodes, rv=49717 labels= fields= timeout=5m30s
I0920 06:48:40.079259  108489 get.go:251] Starting watch for /api/v1/replicationcontrollers, rv=49715 labels= fields= timeout=6m20s
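The reflector start lines above come from client-go's shared informer factory (factory.go:134), and the "(1s)" resync period is what drives the periodic "forcing resync" lines later in the log. A hedged sketch of that pattern follows; the kubeconfig path is an assumption, since the test instead talks to its in-process apiserver.

// Hedged sketch of the client-go shared-informer startup pattern.
package main

import (
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig path, for illustration only.
	config, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	stopCh := make(chan struct{})
	defer close(stopCh)

	// 1s resync, matching the "(1s)" shown in the reflector start lines.
	factory := informers.NewSharedInformerFactory(client, 1*time.Second)
	nodeInformer := factory.Core().V1().Nodes().Informer()
	_ = nodeInformer // event handlers would be registered here

	factory.Start(stopCh)            // starts one reflector (list+watch) per informer
	factory.WaitForCacheSync(stopCh) // the "Waiting for caches to sync" / "Caches are synced" lines
}

With a real clientset, WaitForCacheSync blocks until each informer has completed its initial list, which is what the "caches populated" lines record.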
I0920 06:48:40.175831  108489 shared_informer.go:227] caches populated
I0920 06:48:40.175869  108489 shared_informer.go:227] caches populated
I0920 06:48:40.175881  108489 shared_informer.go:227] caches populated
I0920 06:48:40.175887  108489 shared_informer.go:227] caches populated
I0920 06:48:40.175895  108489 shared_informer.go:227] caches populated
I0920 06:48:40.175903  108489 shared_informer.go:227] caches populated
I0920 06:48:40.175908  108489 shared_informer.go:227] caches populated
I0920 06:48:40.175914  108489 shared_informer.go:227] caches populated
I0920 06:48:40.175920  108489 shared_informer.go:227] caches populated
I0920 06:48:40.175929  108489 shared_informer.go:227] caches populated
I0920 06:48:40.175937  108489 shared_informer.go:227] caches populated
I0920 06:48:40.179193  108489 httplog.go:90] POST /api/v1/namespaces: (2.683306ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35012]
I0920 06:48:40.179510  108489 node_lifecycle_controller.go:327] Sending events to api server.
I0920 06:48:40.179576  108489 node_lifecycle_controller.go:359] Controller is using taint based evictions.
W0920 06:48:40.179592  108489 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0920 06:48:40.179661  108489 taint_manager.go:162] Sending events to api server.
I0920 06:48:40.179777  108489 node_lifecycle_controller.go:453] Controller will reconcile labels.
I0920 06:48:40.179798  108489 node_lifecycle_controller.go:465] Controller will taint node by condition.
W0920 06:48:40.179809  108489 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0920 06:48:40.179829  108489 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0920 06:48:40.179866  108489 node_lifecycle_controller.go:488] Starting node controller
I0920 06:48:40.179883  108489 shared_informer.go:197] Waiting for caches to sync for taint
I0920 06:48:40.180097  108489 reflector.go:118] Starting reflector *v1.Namespace (1s) from k8s.io/client-go/informers/factory.go:134
I0920 06:48:40.180121  108489 reflector.go:153] Listing and watching *v1.Namespace from k8s.io/client-go/informers/factory.go:134
I0920 06:48:40.181183  108489 httplog.go:90] GET /api/v1/namespaces?limit=500&resourceVersion=0: (793.349µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35012]
I0920 06:48:40.182296  108489 get.go:251] Starting watch for /api/v1/namespaces, rv=49955 labels= fields= timeout=8m55s
I0920 06:48:40.280034  108489 shared_informer.go:227] caches populated
I0920 06:48:40.280101  108489 shared_informer.go:227] caches populated
I0920 06:48:40.280106  108489 shared_informer.go:227] caches populated
I0920 06:48:40.280111  108489 shared_informer.go:227] caches populated
I0920 06:48:40.280115  108489 shared_informer.go:227] caches populated
I0920 06:48:40.280119  108489 shared_informer.go:227] caches populated
I0920 06:48:40.280124  108489 shared_informer.go:227] caches populated
I0920 06:48:40.280344  108489 reflector.go:118] Starting reflector *v1.Pod (1s) from k8s.io/client-go/informers/factory.go:134
I0920 06:48:40.280372  108489 reflector.go:153] Listing and watching *v1.Pod from k8s.io/client-go/informers/factory.go:134
I0920 06:48:40.280417  108489 reflector.go:118] Starting reflector *v1beta1.Lease (1s) from k8s.io/client-go/informers/factory.go:134
I0920 06:48:40.280447  108489 reflector.go:153] Listing and watching *v1beta1.Lease from k8s.io/client-go/informers/factory.go:134
I0920 06:48:40.280450  108489 reflector.go:118] Starting reflector *v1.DaemonSet (1s) from k8s.io/client-go/informers/factory.go:134
I0920 06:48:40.280464  108489 reflector.go:153] Listing and watching *v1.DaemonSet from k8s.io/client-go/informers/factory.go:134
I0920 06:48:40.281825  108489 httplog.go:90] GET /api/v1/pods?limit=500&resourceVersion=0: (673.813µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35016]
I0920 06:48:40.281883  108489 httplog.go:90] GET /apis/coordination.k8s.io/v1beta1/leases?limit=500&resourceVersion=0: (601.578µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35066]
I0920 06:48:40.282482  108489 get.go:251] Starting watch for /api/v1/pods, rv=49715 labels= fields= timeout=6m56s
I0920 06:48:40.282664  108489 httplog.go:90] GET /apis/apps/v1/daemonsets?limit=500&resourceVersion=0: (458.257µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35016]
I0920 06:48:40.282976  108489 get.go:251] Starting watch for /apis/coordination.k8s.io/v1beta1/leases, rv=49715 labels= fields= timeout=9m6s
I0920 06:48:40.283384  108489 get.go:251] Starting watch for /apis/apps/v1/daemonsets, rv=49717 labels= fields= timeout=7m35s
I0920 06:48:40.380076  108489 shared_informer.go:227] caches populated
I0920 06:48:40.380115  108489 shared_informer.go:204] Caches are synced for taint 
I0920 06:48:40.380186  108489 taint_manager.go:186] Starting NoExecuteTaintManager
I0920 06:48:40.380296  108489 shared_informer.go:227] caches populated
I0920 06:48:40.380479  108489 shared_informer.go:227] caches populated
I0920 06:48:40.380490  108489 shared_informer.go:227] caches populated
I0920 06:48:40.380503  108489 shared_informer.go:227] caches populated
I0920 06:48:40.380510  108489 shared_informer.go:227] caches populated
I0920 06:48:40.380517  108489 shared_informer.go:227] caches populated
I0920 06:48:40.380523  108489 shared_informer.go:227] caches populated
I0920 06:48:40.383719  108489 httplog.go:90] POST /api/v1/nodes: (2.698428ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35016]
I0920 06:48:40.384257  108489 node_tree.go:93] Added node "node-0" in group "region1:\x00:zone1" to NodeTree
I0920 06:48:40.384353  108489 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-0"}
I0920 06:48:40.384372  108489 taint_manager.go:438] Updating known taints on node node-0: []
I0920 06:48:40.385421  108489 httplog.go:90] GET /api/v1/nodes/node-0?resourceVersion=0: (526.103µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35092]
I0920 06:48:40.387489  108489 httplog.go:90] POST /api/v1/nodes: (3.199068ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35016]
I0920 06:48:40.387763  108489 node_tree.go:93] Added node "node-1" in group "region1:\x00:zone1" to NodeTree
I0920 06:48:40.388561  108489 httplog.go:90] GET /api/v1/nodes/node-1?resourceVersion=0: (557.546µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35016]
I0920 06:48:40.390882  108489 node_tree.go:93] Added node "node-2" in group "region1:\x00:zone1" to NodeTree
I0920 06:48:40.391334  108489 httplog.go:90] POST /api/v1/nodes: (2.592529ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35094]
I0920 06:48:40.391695  108489 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-2"}
I0920 06:48:40.391726  108489 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-1"}
I0920 06:48:40.391734  108489 taint_manager.go:438] Updating known taints on node node-2: []
I0920 06:48:40.391751  108489 taint_manager.go:438] Updating known taints on node node-1: []
I0920 06:48:40.392282  108489 httplog.go:90] PATCH /api/v1/nodes/node-0: (5.795472ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35092]
I0920 06:48:40.393011  108489 controller_utils.go:204] Added [&Taint{Key:node.kubernetes.io/memory-pressure,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 06:48:40.384266648 +0000 UTC m=+232.006446993,} &Taint{Key:node.kubernetes.io/disk-pressure,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 06:48:40.384266957 +0000 UTC m=+232.006447284,} &Taint{Key:node.kubernetes.io/pid-pressure,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 06:48:40.384267121 +0000 UTC m=+232.006447454,}] Taint to Node node-0
I0920 06:48:40.393052  108489 controller_utils.go:216] Made sure that Node node-0 has no [] Taint
I0920 06:48:40.393205  108489 httplog.go:90] PATCH /api/v1/nodes/node-1: (3.785104ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35016]
I0920 06:48:40.393867  108489 controller_utils.go:204] Added [&Taint{Key:node.kubernetes.io/memory-pressure,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 06:48:40.387738102 +0000 UTC m=+232.009918448,} &Taint{Key:node.kubernetes.io/disk-pressure,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 06:48:40.387738334 +0000 UTC m=+232.009918660,} &Taint{Key:node.kubernetes.io/pid-pressure,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 06:48:40.387738638 +0000 UTC m=+232.009918960,}] Taint to Node node-1
I0920 06:48:40.393966  108489 controller_utils.go:216] Made sure that Node node-1 has no [] Taint
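The "Added [...] Taint to Node" lines reflect the node-lifecycle controller reconciling node.Spec.Taints. A simplified sketch of what that amounts to, using a plain Update with a current client-go signature rather than the controller's PATCH; the helper name addTaint is hypothetical.

// Illustrative sketch only; the real controller patches the node object.
package sketch

import (
	"context"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// addTaint appends taint to the node's spec unless an equivalent taint
// (same key and effect) is already present.
func addTaint(client kubernetes.Interface, nodeName string, taint v1.Taint) error {
	node, err := client.CoreV1().Nodes().Get(context.TODO(), nodeName, metav1.GetOptions{})
	if err != nil {
		return err
	}
	for _, t := range node.Spec.Taints {
		if t.Key == taint.Key && t.Effect == taint.Effect {
			return nil // already tainted; nothing to do
		}
	}
	node.Spec.Taints = append(node.Spec.Taints, taint)
	_, err = client.CoreV1().Nodes().Update(context.TODO(), node, metav1.UpdateOptions{})
	return err
}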
I0920 06:48:40.394943  108489 httplog.go:90] POST /api/v1/namespaces/taint-based-evictionsc58212e0-ac9c-4d15-9b66-87b5857c5b20/pods: (1.835255ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35094]
I0920 06:48:40.395240  108489 taint_manager.go:398] Noticed pod update: types.NamespacedName{Namespace:"taint-based-evictionsc58212e0-ac9c-4d15-9b66-87b5857c5b20", Name:"testpod-0"}
I0920 06:48:40.395353  108489 scheduling_queue.go:830] About to try and schedule pod taint-based-evictionsc58212e0-ac9c-4d15-9b66-87b5857c5b20/testpod-0
I0920 06:48:40.395384  108489 scheduler.go:530] Attempting to schedule pod: taint-based-evictionsc58212e0-ac9c-4d15-9b66-87b5857c5b20/testpod-0
I0920 06:48:40.395541  108489 scheduler_binder.go:257] AssumePodVolumes for pod "taint-based-evictionsc58212e0-ac9c-4d15-9b66-87b5857c5b20/testpod-0", node "node-2"
I0920 06:48:40.395665  108489 scheduler_binder.go:267] AssumePodVolumes for pod "taint-based-evictionsc58212e0-ac9c-4d15-9b66-87b5857c5b20/testpod-0", node "node-2": all PVCs bound and nothing to do
I0920 06:48:40.395833  108489 factory.go:606] Attempting to bind testpod-0 to node-2
I0920 06:48:40.398345  108489 httplog.go:90] POST /api/v1/namespaces/taint-based-evictionsc58212e0-ac9c-4d15-9b66-87b5857c5b20/pods/testpod-0/binding: (2.106362ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35016]
I0920 06:48:40.398571  108489 scheduler.go:662] pod taint-based-evictionsc58212e0-ac9c-4d15-9b66-87b5857c5b20/testpod-0 is bound successfully on node "node-2", 3 nodes evaluated, 1 node found feasible. Bound node resource: "Capacity: CPU<4>|Memory<16Gi>|Pods<110>|StorageEphemeral<0>; Allocatable: CPU<4>|Memory<16Gi>|Pods<110>|StorageEphemeral<0>.".
I0920 06:48:40.399603  108489 taint_manager.go:398] Noticed pod update: types.NamespacedName{Namespace:"taint-based-evictionsc58212e0-ac9c-4d15-9b66-87b5857c5b20", Name:"testpod-0"}
I0920 06:48:40.400924  108489 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/taint-based-evictionsc58212e0-ac9c-4d15-9b66-87b5857c5b20/events: (1.946696ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35016]
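The POST to the pods/testpod-0/binding subresource above is how the scheduler records its placement decision. A hedged sketch of issuing such a binding with client-go, with names taken from the log and error handling reduced to panics:

// Illustrative sketch of binding a pod to a node via the binding subresource.
package main

import (
	"context"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig") // assumed path
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// Bind testpod-0 to node-2, mirroring the POST .../pods/testpod-0/binding above.
	binding := &v1.Binding{
		ObjectMeta: metav1.ObjectMeta{Name: "testpod-0"},
		Target:     v1.ObjectReference{Kind: "Node", Name: "node-2"},
	}
	ns := "taint-based-evictionsc58212e0-ac9c-4d15-9b66-87b5857c5b20"
	if err := client.CoreV1().Pods(ns).Bind(context.TODO(), binding, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}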
I0920 06:48:40.402557  108489 httplog.go:90] GET /api/v1/nodes/node-2?resourceVersion=0: (588.199µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35096]
I0920 06:48:40.406887  108489 httplog.go:90] PATCH /api/v1/nodes/node-2: (2.230176ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35016]
I0920 06:48:40.407115  108489 controller_utils.go:204] Added [&Taint{Key:node.kubernetes.io/memory-pressure,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 06:48:40.390827295 +0000 UTC m=+232.013007646,} &Taint{Key:node.kubernetes.io/disk-pressure,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 06:48:40.39082752 +0000 UTC m=+232.013007841,} &Taint{Key:node.kubernetes.io/pid-pressure,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 06:48:40.390827766 +0000 UTC m=+232.013008087,}] Taint to Node node-2
I0920 06:48:40.407168  108489 controller_utils.go:216] Made sure that Node node-2 has no [] Taint
I0920 06:48:40.472035  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:40.472754  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:40.474577  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:40.475070  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:40.476049  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:40.476387  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:40.478563  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:40.497378  108489 httplog.go:90] GET /api/v1/namespaces/taint-based-evictionsc58212e0-ac9c-4d15-9b66-87b5857c5b20/pods/testpod-0: (1.738454ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35016]
I0920 06:48:40.499418  108489 httplog.go:90] GET /api/v1/namespaces/taint-based-evictionsc58212e0-ac9c-4d15-9b66-87b5857c5b20/pods/testpod-0: (1.402261ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35016]
I0920 06:48:40.501402  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.469706ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35016]
I0920 06:48:40.503957  108489 httplog.go:90] PUT /api/v1/nodes/node-2/status: (2.024952ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35016]
I0920 06:48:40.505146  108489 httplog.go:90] GET /api/v1/nodes/node-2?resourceVersion=0: (570.426µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35016]
I0920 06:48:40.508967  108489 httplog.go:90] PATCH /api/v1/nodes/node-2: (2.90979ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35016]
I0920 06:48:40.509392  108489 controller_utils.go:204] Added [&Taint{Key:node.kubernetes.io/not-ready,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 06:48:40.504373094 +0000 UTC m=+232.126553435,}] Taint to Node node-2
I0920 06:48:40.510437  108489 httplog.go:90] GET /api/v1/nodes/node-2?resourceVersion=0: (667.498µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35016]
I0920 06:48:40.515448  108489 httplog.go:90] PATCH /api/v1/nodes/node-2: (4.154941ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35016]
I0920 06:48:40.515777  108489 controller_utils.go:216] Made sure that Node node-2 has no [&Taint{Key:node.kubernetes.io/memory-pressure,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 06:48:40 +0000 UTC,} &Taint{Key:node.kubernetes.io/disk-pressure,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 06:48:40 +0000 UTC,} &Taint{Key:node.kubernetes.io/pid-pressure,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 06:48:40 +0000 UTC,}] Taint
I0920 06:48:40.606829  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.03595ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35016]
I0920 06:48:40.706764  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.9709ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35016]
I0920 06:48:40.806566  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.869107ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35016]
I0920 06:48:40.907086  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.373668ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35016]
I0920 06:48:41.007744  108489 httplog.go:90] GET /api/v1/nodes/node-2: (3.071984ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35016]
I0920 06:48:41.078152  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:41.078424  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:41.078471  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:41.078615  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:41.078810  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:41.078812  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:41.108628  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.18339ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35016]
I0920 06:48:41.207099  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.388903ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35016]
I0920 06:48:41.282345  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:41.306914  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.237368ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35016]
I0920 06:48:41.406692  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.997314ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35016]
I0920 06:48:41.472236  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:41.472891  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:41.474929  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:41.475321  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:41.476364  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:41.476639  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:41.478736  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:41.506595  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.804849ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35016]
I0920 06:48:41.606534  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.60888ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35016]
I0920 06:48:41.706617  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.961059ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35016]
I0920 06:48:41.806679  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.973088ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35016]
I0920 06:48:41.907189  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.252869ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35016]
I0920 06:48:42.006747  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.002498ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35016]
I0920 06:48:42.078266  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:42.078601  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:42.078773  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:42.078894  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:42.079026  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:42.078931  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:42.106724  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.917254ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35016]
I0920 06:48:42.206301  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.613436ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35016]
I0920 06:48:42.282637  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:42.307042  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.317988ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35016]
I0920 06:48:42.406484  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.642966ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35016]
I0920 06:48:42.472426  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:42.473016  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:42.475293  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:42.475522  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:42.476712  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:42.476815  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:42.478904  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:42.506524  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.860859ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35016]
I0920 06:48:42.606904  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.080834ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35016]
I0920 06:48:42.706955  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.234952ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35016]
I0920 06:48:42.806427  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.750347ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35016]
I0920 06:48:42.907147  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.983018ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35016]
I0920 06:48:43.006998  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.199413ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35016]
I0920 06:48:43.078395  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:43.078768  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:43.078903  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:43.079202  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:43.079204  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:43.079230  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:43.106390  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.715624ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35016]
I0920 06:48:43.206473  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.734922ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35016]
I0920 06:48:43.282904  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:43.306479  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.778518ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35016]
I0920 06:48:43.383239  108489 httplog.go:90] GET /api/v1/namespaces/default: (1.381184ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52156]
I0920 06:48:43.385176  108489 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.503712ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52156]
I0920 06:48:43.387115  108489 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.408338ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52156]
I0920 06:48:43.406844  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.064007ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35016]
I0920 06:48:43.472605  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:43.473235  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:43.475461  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:43.475681  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:43.476887  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:43.476994  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:43.479088  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:43.506576  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.874731ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35016]
I0920 06:48:43.610039  108489 httplog.go:90] GET /api/v1/nodes/node-2: (5.328661ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35016]
I0920 06:48:43.706453  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.796913ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35016]
I0920 06:48:43.806738  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.921682ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35016]
I0920 06:48:43.906763  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.866924ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35016]
I0920 06:48:44.007026  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.354249ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35016]
I0920 06:48:44.078585  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:44.078965  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:44.079379  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:44.079399  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:44.079416  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:44.080195  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:44.106380  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.724009ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35016]
I0920 06:48:44.206781  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.078098ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35016]
I0920 06:48:44.283153  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:44.306511  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.797019ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35016]
I0920 06:48:44.406662  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.861844ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35016]
I0920 06:48:44.472793  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:44.473488  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:44.476002  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:44.476048  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:44.477054  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:44.478263  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:44.479260  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:44.506324  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.637868ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35016]
I0920 06:48:44.606771  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.833145ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35016]
I0920 06:48:44.706871  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.183283ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35016]
I0920 06:48:44.807000  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.168316ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35016]
I0920 06:48:44.906685  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.902568ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35016]
I0920 06:48:45.006763  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.921698ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35016]
I0920 06:48:45.078783  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:45.079048  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:45.079558  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:45.079605  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:45.079642  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:45.080337  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:45.106855  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.173945ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35016]
I0920 06:48:45.206817  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.131105ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35016]
I0920 06:48:45.283335  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:45.306973  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.137037ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35016]
I0920 06:48:45.380410  108489 node_lifecycle_controller.go:706] Controller observed a new Node: "node-0"
I0920 06:48:45.380454  108489 controller_utils.go:168] Recording Registered Node node-0 in Controller event message for node node-0
I0920 06:48:45.380550  108489 node_lifecycle_controller.go:1244] Initializing eviction metric for zone: region1:\x00:zone1
I0920 06:48:45.380579  108489 node_lifecycle_controller.go:706] Controller observed a new Node: "node-1"
I0920 06:48:45.380586  108489 controller_utils.go:168] Recording Registered Node node-1 in Controller event message for node node-1
I0920 06:48:45.380600  108489 node_lifecycle_controller.go:706] Controller observed a new Node: "node-2"
I0920 06:48:45.380607  108489 controller_utils.go:168] Recording Registered Node node-2 in Controller event message for node node-2
W0920 06:48:45.380655  108489 node_lifecycle_controller.go:940] Missing timestamp for Node node-0. Assuming now as a timestamp.
W0920 06:48:45.381131  108489 node_lifecycle_controller.go:940] Missing timestamp for Node node-1. Assuming now as a timestamp.
W0920 06:48:45.381354  108489 node_lifecycle_controller.go:940] Missing timestamp for Node node-2. Assuming now as a timestamp.
I0920 06:48:45.381467  108489 node_lifecycle_controller.go:770] Node node-2 is NotReady as of 2019-09-20 06:48:45.381449632 +0000 UTC m=+237.003629966. Adding it to the Taint queue.
I0920 06:48:45.381596  108489 node_lifecycle_controller.go:1144] Controller detected that zone region1:\x00:zone1 is now in state Normal.
I0920 06:48:45.380741  108489 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-0", UID:"c744d8c0-da5f-4c32-b42a-a0e6cdbac2f6", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node node-0 event: Registered Node node-0 in Controller
I0920 06:48:45.381857  108489 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-2", UID:"363b2219-19b9-47a2-a61b-4e4515d94e2e", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node node-2 event: Registered Node node-2 in Controller
I0920 06:48:45.382122  108489 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-1", UID:"0c784d08-4517-435c-8b3d-f56314d174dc", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node node-1 event: Registered Node node-1 in Controller
I0920 06:48:45.386336  108489 httplog.go:90] POST /api/v1/namespaces/default/events: (3.617998ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35016]
I0920 06:48:45.389779  108489 httplog.go:90] GET /api/v1/nodes/node-2?resourceVersion=0: (670.649µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35092]
I0920 06:48:45.391486  108489 httplog.go:90] POST /api/v1/namespaces/default/events: (4.175172ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35016]
I0920 06:48:45.393569  108489 httplog.go:90] POST /api/v1/namespaces/default/events: (1.546048ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35016]
I0920 06:48:45.393811  108489 httplog.go:90] PATCH /api/v1/nodes/node-2: (2.805768ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35092]
I0920 06:48:45.394160  108489 controller_utils.go:204] Added [&Taint{Key:node.kubernetes.io/not-ready,Value:,Effect:NoExecute,TimeAdded:2019-09-20 06:48:45.388823953 +0000 UTC m=+237.011004339,}] Taint to Node node-2
I0920 06:48:45.394198  108489 controller_utils.go:216] Made sure that Node node-2 has no [&Taint{Key:node.kubernetes.io/unreachable,Value:,Effect:NoExecute,TimeAdded:<nil>,}] Taint
I0920 06:48:45.394497  108489 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-2"}
I0920 06:48:45.394532  108489 taint_manager.go:438] Updating known taints on node node-2: [{node.kubernetes.io/not-ready  NoExecute 2019-09-20 06:48:45 +0000 UTC}]
I0920 06:48:45.394590  108489 timed_workers.go:110] Adding TimedWorkerQueue item taint-based-evictionsc58212e0-ac9c-4d15-9b66-87b5857c5b20/testpod-0 at 2019-09-20 06:48:45.394581504 +0000 UTC m=+237.016761849 to be fired at 2019-09-20 06:52:05.394581504 +0000 UTC m=+437.016761849
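The queue item above is scheduled to fire 200s after the taint was noticed (06:48:45 -> 06:52:05): the pod's tolerationSeconds for the not-ready NoExecute taint. A minimal sketch of such a toleration, with the 200s value inferred from those timestamps rather than taken from the test source:

// Illustrative sketch of a NoExecute toleration with tolerationSeconds.
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func main() {
	seconds := int64(200) // assumed; matches the 3m20s gap in the log
	tol := v1.Toleration{
		Key:               "node.kubernetes.io/not-ready",
		Operator:          v1.TolerationOpExists,
		Effect:            v1.TaintEffectNoExecute,
		TolerationSeconds: &seconds,
	}
	pod := v1.Pod{Spec: v1.PodSpec{Tolerations: []v1.Toleration{tol}}}
	fmt.Printf("pod tolerates not-ready for %ds before eviction\n",
		*pod.Spec.Tolerations[0].TolerationSeconds)
}

When the timer fires, the taint manager deletes the pod; a matching toleration with no TolerationSeconds set would suppress the eviction indefinitely.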
I0920 06:48:45.406353  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.644358ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35092]
I0920 06:48:45.472927  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:45.473640  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:45.476179  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:45.476321  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:45.477203  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:45.478399  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:45.479424  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:45.506839  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.036945ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35092]
I0920 06:48:45.607749  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.486478ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35092]
I0920 06:48:45.706744  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.01927ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35092]
I0920 06:48:45.806515  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.621531ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35092]
I0920 06:48:45.906319  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.636331ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35092]
I0920 06:48:46.006596  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.893831ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35092]
I0920 06:48:46.078865  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:46.079239  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:46.079768  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:46.079888  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:46.079893  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:46.080514  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:46.106775  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.986007ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35092]
I0920 06:48:46.206680  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.865886ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35092]
I0920 06:48:46.283540  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:46.306487  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.715855ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35092]
I0920 06:48:46.406511  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.795443ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35092]
I0920 06:48:46.473219  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:46.473796  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:46.476658  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:46.476725  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:46.477539  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:46.478840  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:46.479675  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:46.506263  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.605679ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35092]
I0920 06:48:46.607978  108489 httplog.go:90] GET /api/v1/nodes/node-2: (3.279751ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35092]
I0920 06:48:46.706564  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.889551ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35092]
I0920 06:48:46.806364  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.682964ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35092]
I0920 06:48:46.906129  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.464587ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35092]
I0920 06:48:47.006587  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.873895ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35092]
I0920 06:48:47.079025  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:47.079521  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:47.079872  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:47.080026  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:47.080282  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:47.080797  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:47.106247  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.594741ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35092]
I0920 06:48:47.206331  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.657204ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35092]
I0920 06:48:47.283752  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:47.306651  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.958064ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35092]
I0920 06:48:47.406592  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.867111ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35092]
I0920 06:48:47.473384  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:47.474072  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:47.476885  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:47.476884  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:47.477760  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:47.479045  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:47.479905  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:47.507261  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.549275ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35092]
I0920 06:48:47.610741  108489 httplog.go:90] GET /api/v1/nodes/node-2: (5.799327ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35092]
I0920 06:48:47.706514  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.864241ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35092]
I0920 06:48:47.806228  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.580403ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35092]
I0920 06:48:47.906820  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.993227ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35092]
I0920 06:48:48.007094  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.230906ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35092]
I0920 06:48:48.079244  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:48.079692  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:48.079988  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:48.080176  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:48.080430  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:48.080935  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:48.106971  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.258973ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35092]
I0920 06:48:48.206963  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.079135ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35092]
I0920 06:48:48.283970  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:48.306874  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.1649ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35092]
I0920 06:48:48.406908  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.140626ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35092]
I0920 06:48:48.473594  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:48.474244  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:48.477074  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:48.477118  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:48.478033  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:48.479375  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:48.480207  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:48.506574  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.804893ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35092]
I0920 06:48:48.607131  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.32136ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35092]
I0920 06:48:48.706951  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.313788ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35092]
I0920 06:48:48.806577  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.002376ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35092]
I0920 06:48:48.906818  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.047315ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35092]
I0920 06:48:49.006374  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.630977ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35092]
I0920 06:48:49.079489  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:49.080238  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:49.080324  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:49.080444  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:49.080583  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:49.081116  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:49.107210  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.442466ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35092]
I0920 06:48:49.206459  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.785354ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35092]
I0920 06:48:49.284184  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:49.306637  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.930461ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35092]
I0920 06:48:49.406660  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.489763ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35092]
I0920 06:48:49.473800  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:49.474424  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:49.477500  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:49.477542  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:49.478228  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:49.479534  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:49.480374  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:49.506515  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.818902ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35092]
I0920 06:48:49.606497  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.669467ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35092]
I0920 06:48:49.706810  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.014236ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35092]
I0920 06:48:49.806446  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.742937ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35092]
I0920 06:48:49.899034  108489 httplog.go:90] GET /api/v1/namespaces/default: (2.199585ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35092]
I0920 06:48:49.901131  108489 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.597741ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35092]
I0920 06:48:49.903213  108489 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.612491ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35092]
I0920 06:48:49.906491  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.412177ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35092]
I0920 06:48:50.006348  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.555025ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35092]
I0920 06:48:50.079681  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:50.080372  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:50.080510  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:50.080819  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:50.080928  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:50.081258  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:50.106257  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.542802ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35092]
I0920 06:48:50.206946  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.998834ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35092]
I0920 06:48:50.284426  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:50.306670  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.984009ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35092]
I0920 06:48:50.382573  108489 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 5.001490671s. Last Ready is: &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:0001-01-01 00:00:00 +0000 UTC,Reason:,Message:,}
I0920 06:48:50.382672  108489 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 5.00160523s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:True,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:0001-01-01 00:00:00 +0000 UTC,Reason:,Message:,}
I0920 06:48:50.382688  108489 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 5.001623235s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:True,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:0001-01-01 00:00:00 +0000 UTC,Reason:,Message:,}
I0920 06:48:50.382715  108489 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 5.001650313s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:True,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:0001-01-01 00:00:00 +0000 UTC,Reason:,Message:,}
I0920 06:48:50.386600  108489 httplog.go:90] PUT /api/v1/nodes/node-0/status: (3.362666ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35092]
I0920 06:48:50.386996  108489 controller_utils.go:180] Recording status change NodeNotReady event message for node node-0
I0920 06:48:50.387034  108489 controller_utils.go:124] Update ready status of pods on node [node-0]
I0920 06:48:50.387259  108489 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-0", UID:"c744d8c0-da5f-4c32-b42a-a0e6cdbac2f6", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeNotReady' Node node-0 status is now: NodeNotReady
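The four node_lifecycle_controller lines above are the controller's per-condition staleness check: each condition's last heartbeat is compared against a grace period (about 5s under this test's shortened timings) before the node is flipped to NotReady and the event is recorded. A minimal sketch of that check in plain Go; `staleNode`, `lastHeartbeat`, and `gracePeriod` are illustrative names, not the controller's actual code:

```go
package main

import (
	"fmt"
	"time"
)

// staleNode reports whether a node's status is stale, i.e. the kubelet has
// not posted a heartbeat within the grace period. The real controller tracks
// per-node probe timestamps; this is just the core comparison.
func staleNode(lastHeartbeat time.Time, gracePeriod time.Duration) (bool, time.Duration) {
	elapsed := time.Since(lastHeartbeat)
	return elapsed > gracePeriod, elapsed
}

func main() {
	// Pretend the kubelet went quiet 5 seconds ago.
	lastHeartbeat := time.Now().Add(-5 * time.Second)
	stale, elapsed := staleNode(lastHeartbeat, 4*time.Second)
	if stale {
		fmt.Printf("node hasn't been updated for %s; marking NotReady\n", elapsed)
	}
}
```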
I0920 06:48:50.387753  108489 httplog.go:90] GET /api/v1/nodes/node-0?resourceVersion=0: (664.288µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35092]
I0920 06:48:50.388997  108489 httplog.go:90] GET /api/v1/pods?fieldSelector=spec.nodeName%3Dnode-0: (1.722271ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35016]
I0920 06:48:50.389255  108489 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 5.007937097s. Last Ready is: &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:0001-01-01 00:00:00 +0000 UTC,Reason:,Message:,}
I0920 06:48:50.389295  108489 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 5.007982373s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:True,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:0001-01-01 00:00:00 +0000 UTC,Reason:,Message:,}
I0920 06:48:50.389316  108489 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 5.008004425s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:True,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:0001-01-01 00:00:00 +0000 UTC,Reason:,Message:,}
I0920 06:48:50.389330  108489 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 5.008019127s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:True,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:0001-01-01 00:00:00 +0000 UTC,Reason:,Message:,}
I0920 06:48:50.389838  108489 httplog.go:90] POST /api/v1/namespaces/default/events: (1.999374ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38292]
I0920 06:48:50.391738  108489 httplog.go:90] PUT /api/v1/nodes/node-1/status: (2.042888ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35016]
I0920 06:48:50.392197  108489 httplog.go:90] PATCH /api/v1/nodes/node-0: (3.645783ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35092]
I0920 06:48:50.392246  108489 controller_utils.go:180] Recording status change NodeNotReady event message for node node-1
I0920 06:48:50.392269  108489 controller_utils.go:124] Update ready status of pods on node [node-1]
I0920 06:48:50.392529  108489 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-1", UID:"0c784d08-4517-435c-8b3d-f56314d174dc", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeNotReady' Node node-1 status is now: NodeNotReady
I0920 06:48:50.392574  108489 controller_utils.go:204] Added [&Taint{Key:node.kubernetes.io/unreachable,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 06:48:50.38678446 +0000 UTC m=+242.008964805,}] Taint to Node node-0
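The taint added here is a plain `v1.Taint` carrying the `node.kubernetes.io/unreachable` key with the `NoSchedule` effect and a `TimeAdded` stamp. A small sketch that builds the same shape of object with the `k8s.io/api` types; `makeUnreachableTaint` is a hypothetical helper, not controller code:

```go
package main

import (
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// makeUnreachableTaint builds the NoSchedule taint the controller attaches to
// an unresponsive node. The helper name is illustrative; only the v1.Taint
// shape matches what the log shows being added.
func makeUnreachableTaint(now time.Time) v1.Taint {
	added := metav1.NewTime(now)
	return v1.Taint{
		Key:       "node.kubernetes.io/unreachable", // literal key, as printed in the log
		Effect:    v1.TaintEffectNoSchedule,
		TimeAdded: &added,
	}
}

func main() {
	t := makeUnreachableTaint(time.Now())
	fmt.Printf("Added [&Taint{Key:%s,Effect:%s}] to node\n", t.Key, t.Effect)
}
```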
I0920 06:48:50.393344  108489 httplog.go:90] GET /api/v1/nodes/node-0?resourceVersion=0: (339.088µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38294]
I0920 06:48:50.393892  108489 httplog.go:90] GET /api/v1/pods?fieldSelector=spec.nodeName%3Dnode-1: (1.380601ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35016]
I0920 06:48:50.394349  108489 httplog.go:90] GET /api/v1/nodes/node-1?resourceVersion=0: (579.033µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38296]
I0920 06:48:50.394335  108489 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 5.012883353s. Last Ready is: &NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:0001-01-01 00:00:00 +0000 UTC,Reason:,Message:,}
I0920 06:48:50.394405  108489 node_lifecycle_controller.go:1012] Condition MemoryPressure of node node-2 was never updated by kubelet
I0920 06:48:50.394416  108489 node_lifecycle_controller.go:1012] Condition DiskPressure of node node-2 was never updated by kubelet
I0920 06:48:50.394425  108489 node_lifecycle_controller.go:1012] Condition PIDPressure of node node-2 was never updated by kubelet
I0920 06:48:50.394865  108489 httplog.go:90] POST /api/v1/namespaces/default/events: (2.268239ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38292]
I0920 06:48:50.397840  108489 httplog.go:90] PATCH /api/v1/nodes/node-1: (2.491287ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38292]
I0920 06:48:50.398202  108489 httplog.go:90] PATCH /api/v1/nodes/node-0: (3.147212ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38294]
I0920 06:48:50.398445  108489 controller_utils.go:204] Added [&Taint{Key:node.kubernetes.io/unreachable,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 06:48:50.393413935 +0000 UTC m=+242.015594280,}] Taint to Node node-1
I0920 06:48:50.398569  108489 httplog.go:90] PUT /api/v1/nodes/node-2/status: (3.896516ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35016]
I0920 06:48:50.398902  108489 controller_utils.go:216] Made sure that Node node-0 has no [&Taint{Key:node.kubernetes.io/memory-pressure,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 06:48:40 +0000 UTC,} &Taint{Key:node.kubernetes.io/disk-pressure,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 06:48:40 +0000 UTC,} &Taint{Key:node.kubernetes.io/pid-pressure,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 06:48:40 +0000 UTC,}] Taint
I0920 06:48:50.398966  108489 cacher.go:777] cacher (*core.Node): 1 objects queued in incoming channel.
I0920 06:48:50.398998  108489 controller_utils.go:204] Added [] Taint to Node node-0
I0920 06:48:50.399018  108489 node_lifecycle_controller.go:1094] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
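"Entering master disruption mode" is the controller noticing that every node it watches is NotReady at once, which it treats as a likely control-plane-side outage and a reason to back off evictions rather than evict everything. A toy sketch of that all-nodes-down check; the ready/total tally is a stand-in for the controller's per-zone bookkeeping:

```go
package main

import "fmt"

// zoneFullyDisrupted reports whether every node in a zone is not ready. When
// that is true, suspending taint-based evictions is safer than evicting every
// workload simultaneously.
func zoneFullyDisrupted(readyNodes, totalNodes int) bool {
	return totalNodes > 0 && readyNodes == 0
}

func main() {
	if zoneFullyDisrupted(0, 3) {
		fmt.Println("Controller detected that all Nodes are not-Ready. Entering master disruption mode.")
	}
}
```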
I0920 06:48:50.399478  108489 httplog.go:90] GET /api/v1/nodes/node-1?resourceVersion=0: (846.047µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38292]
I0920 06:48:50.399477  108489 httplog.go:90] GET /api/v1/nodes/node-2?resourceVersion=0: (308.617µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38296]
I0920 06:48:50.400368  108489 httplog.go:90] GET /api/v1/nodes/node-2?resourceVersion=0: (447.05µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38296]
I0920 06:48:50.400619  108489 httplog.go:90] GET /api/v1/nodes/node-0?resourceVersion=0: (1.489849ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38294]
I0920 06:48:50.400874  108489 controller_utils.go:216] Made sure that Node node-0 has no [&Taint{Key:node.kubernetes.io/memory-pressure,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 06:48:40 +0000 UTC,} &Taint{Key:node.kubernetes.io/disk-pressure,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 06:48:40 +0000 UTC,} &Taint{Key:node.kubernetes.io/pid-pressure,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 06:48:40 +0000 UTC,}] Taint
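The "Made sure that Node node-0 has no [...] Taint" lines are the condition-taint reconciliation this PR concerns: pressure taints should exist only while the matching node condition is True, so once the conditions go stale they are stripped. A sketch of that filter, under the assumption that the three pressure keys shown in the log are exactly the ones to drop; `dropConditionTaints` is an illustrative name:

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

// pressureTaintKeys are the condition-derived taints from the log that should
// track a node condition and be removed when it is no longer True.
var pressureTaintKeys = map[string]bool{
	"node.kubernetes.io/memory-pressure": true,
	"node.kubernetes.io/disk-pressure":   true,
	"node.kubernetes.io/pid-pressure":    true,
}

// dropConditionTaints removes the condition taints, keeping everything else
// (for example the unreachable taint just added).
func dropConditionTaints(taints []v1.Taint) []v1.Taint {
	var kept []v1.Taint
	for _, t := range taints {
		if pressureTaintKeys[t.Key] && t.Effect == v1.TaintEffectNoSchedule {
			continue // condition no longer True: node must have no such taint
		}
		kept = append(kept, t)
	}
	return kept
}

func main() {
	taints := []v1.Taint{
		{Key: "node.kubernetes.io/memory-pressure", Effect: v1.TaintEffectNoSchedule},
		{Key: "node.kubernetes.io/unreachable", Effect: v1.TaintEffectNoSchedule},
	}
	fmt.Println("kept:", dropConditionTaints(taints))
}
```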
I0920 06:48:50.403596  108489 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-2"}
I0920 06:48:50.403627  108489 taint_manager.go:438] Updating known taints on node node-2: []
I0920 06:48:50.403644  108489 taint_manager.go:459] All taints were removed from the Node node-2. Cancelling all evictions...
I0920 06:48:50.403655  108489 timed_workers.go:129] Cancelling TimedWorkerQueue item taint-based-evictionsc58212e0-ac9c-4d15-9b66-87b5857c5b20/testpod-0 at 2019-09-20 06:48:50.403652152 +0000 UTC m=+242.025832494
I0920 06:48:50.403835  108489 httplog.go:90] PATCH /api/v1/nodes/node-1: (3.486207ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38292]
I0920 06:48:50.403935  108489 event.go:255] Event(v1.ObjectReference{Kind:"Pod", Namespace:"taint-based-evictionsc58212e0-ac9c-4d15-9b66-87b5857c5b20", Name:"testpod-0", UID:"", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TaintManagerEviction' Cancelling deletion of Pod taint-based-evictionsc58212e0-ac9c-4d15-9b66-87b5857c5b20/testpod-0
I0920 06:48:50.403971  108489 store.go:362] GuaranteedUpdate of /0664be19-d89c-46c9-bca4-64fbea908a9d/minions/node-2 failed because of a conflict, going to retry
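The GuaranteedUpdate conflict on node-2 above is ordinary optimistic concurrency: two controllers patched the same object at nearly the same instant, the loser's resourceVersion no longer matched, and the storage layer re-reads and retries, which is what "going to retry" reports. A self-contained toy of that read-modify-retry loop; the `store` type and the simulated one-shot conflict are invented for illustration:

```go
package main

import (
	"errors"
	"fmt"
)

var errConflict = errors.New("resource version conflict")

// store is a toy stand-in for etcd-backed storage with optimistic locking.
// conflictOnce simulates a concurrent writer winning the race exactly once.
type store struct {
	version      int
	conflictOnce bool
}

func (s *store) update(expectVersion int, mutate func()) error {
	if s.conflictOnce {
		s.conflictOnce = false
		s.version++ // a concurrent writer got in first
	}
	if expectVersion != s.version {
		return errConflict
	}
	mutate()
	s.version++
	return nil
}

// guaranteedUpdate retries on conflict, re-reading current state on each
// attempt — the behavior behind the "going to retry" log line.
func guaranteedUpdate(s *store, mutate func()) {
	for {
		v := s.version // re-read the current version
		if err := s.update(v, mutate); err == nil {
			return
		}
		fmt.Println("GuaranteedUpdate failed because of a conflict, going to retry")
	}
}

func main() {
	s := &store{conflictOnce: true}
	guaranteedUpdate(s, func() { fmt.Println("applied taint patch at version", s.version) })
}
```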
I0920 06:48:50.404173  108489 controller_utils.go:216] Made sure that Node node-1 has no [&Taint{Key:node.kubernetes.io/memory-pressure,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 06:48:40 +0000 UTC,} &Taint{Key:node.kubernetes.io/disk-pressure,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 06:48:40 +0000 UTC,} &Taint{Key:node.kubernetes.io/pid-pressure,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 06:48:40 +0000 UTC,}] Taint
I0920 06:48:50.404604  108489 httplog.go:90] PATCH /api/v1/nodes/node-2: (3.860771ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:48:50.405741  108489 httplog.go:90] POST /api/v1/namespaces/taint-based-evictionsc58212e0-ac9c-4d15-9b66-87b5857c5b20/events: (1.72607ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38296]
I0920 06:48:50.406149  108489 httplog.go:90] PATCH /api/v1/nodes/node-2: (4.678807ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38294]
I0920 06:48:50.406407  108489 controller_utils.go:204] Added [&Taint{Key:node.kubernetes.io/unreachable,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 06:48:50.399783093 +0000 UTC m=+242.021963429,}] Taint to Node node-2
I0920 06:48:50.406481  108489 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-2"}
I0920 06:48:50.406499  108489 taint_manager.go:438] Updating known taints on node node-2: [{node.kubernetes.io/not-ready  NoExecute 2019-09-20 06:48:45 +0000 UTC}]
I0920 06:48:50.406531  108489 timed_workers.go:110] Adding TimedWorkerQueue item taint-based-evictionsc58212e0-ac9c-4d15-9b66-87b5857c5b20/testpod-0 at 2019-09-20 06:48:50.406520563 +0000 UTC m=+242.028700927 to be fired at 2019-09-20 06:52:10.406520563 +0000 UTC m=+442.028700927
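Between .403655 and .406531 the taint manager cancels the pending deletion of testpod-0 and immediately re-arms it: the new TimedWorkerQueue item is due 200 seconds later by the timestamps (06:48:50 → 06:52:10), i.e. when the pod's toleration of the not-ready taint runs out. The same schedule/cancel pattern can be sketched with `time.AfterFunc`; `evictionQueue` and its methods are illustrative, not the real TimedWorkerQueue API:

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// evictionQueue schedules pod deletions to fire after a delay and cancels
// them if the triggering taint disappears first — a toy analogue of the
// taint manager's timed worker queue.
type evictionQueue struct {
	mu     sync.Mutex
	timers map[string]*time.Timer
}

func (q *evictionQueue) schedule(pod string, after time.Duration, evict func()) {
	q.mu.Lock()
	defer q.mu.Unlock()
	fmt.Printf("Adding TimedWorkerQueue item %s to be fired in %s\n", pod, after)
	q.timers[pod] = time.AfterFunc(after, evict)
}

func (q *evictionQueue) cancel(pod string) {
	q.mu.Lock()
	defer q.mu.Unlock()
	if t, ok := q.timers[pod]; ok {
		t.Stop()
		delete(q.timers, pod)
		fmt.Printf("Cancelling TimedWorkerQueue item %s\n", pod)
	}
}

func main() {
	q := &evictionQueue{timers: map[string]*time.Timer{}}
	evict := func() { fmt.Println("evicting testpod-0") }
	q.schedule("testpod-0", 50*time.Millisecond, evict)
	q.cancel("testpod-0") // taint removed: eviction cancelled
	q.schedule("testpod-0", 50*time.Millisecond, evict)
	time.Sleep(100 * time.Millisecond) // let the re-armed timer fire
}
```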
I0920 06:48:50.407193  108489 httplog.go:90] GET /api/v1/nodes/node-2?resourceVersion=0: (614.468µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38296]
I0920 06:48:50.407625  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.133725ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38292]
I0920 06:48:50.410338  108489 httplog.go:90] PATCH /api/v1/nodes/node-2: (2.276185ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38296]
I0920 06:48:50.410662  108489 controller_utils.go:216] Made sure that Node node-2 has no [&Taint{Key:node.kubernetes.io/not-ready,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 06:48:40 +0000 UTC,}] Taint
I0920 06:48:50.474006  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:50.474570  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:50.477670  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:50.477761  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:50.478421  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:50.479734  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:50.480567  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:50.506831  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.092676ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38296]
I0920 06:48:50.606455  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.681036ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38296]
I0920 06:48:50.706958  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.293293ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38296]
I0920 06:48:50.807052  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.209472ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38296]
I0920 06:48:50.907196  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.524938ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38296]
I0920 06:48:51.006661  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.847429ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38296]
I0920 06:48:51.079914  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:51.080561  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:51.080744  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:51.080958  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:51.081121  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:51.081404  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:51.106783  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.901169ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38296]
I0920 06:48:51.206355  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.730687ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38296]
I0920 06:48:51.284694  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:51.306695  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.873875ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38296]
I0920 06:48:51.406392  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.729214ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38296]
I0920 06:48:51.474246  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:51.474767  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:51.477925  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:51.477935  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:51.478649  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:51.479827  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:51.480748  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:51.507039  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.339096ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38296]
I0920 06:48:51.608867  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.105286ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38296]
I0920 06:48:51.706813  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.823771ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38296]
I0920 06:48:51.807126  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.395067ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38296]
I0920 06:48:51.906483  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.818012ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38296]
I0920 06:48:52.006947  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.107092ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38296]
I0920 06:48:52.080085  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:52.080929  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:52.081064  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:52.081130  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:52.081238  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:52.081493  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:52.106882  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.096932ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38296]
I0920 06:48:52.208119  108489 httplog.go:90] GET /api/v1/nodes/node-2: (3.05843ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38296]
I0920 06:48:52.284905  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:52.306495  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.713027ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38296]
I0920 06:48:52.406384  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.7161ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38296]
I0920 06:48:52.474656  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:52.475068  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:52.478141  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:52.478139  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:52.478871  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:52.479969  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:52.480917  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:52.506951  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.942917ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38296]
I0920 06:48:52.607205  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.985512ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38296]
I0920 06:48:52.706521  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.857278ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38296]
I0920 06:48:52.806595  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.941479ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38296]
I0920 06:48:52.907054  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.272868ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38296]
I0920 06:48:53.006500  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.838621ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38296]
I0920 06:48:53.080274  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:53.081051  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:53.081256  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:53.081325  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:53.081715  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:53.081754  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:53.106437  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.731864ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38296]
I0920 06:48:53.206632  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.841425ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38296]
I0920 06:48:53.285125  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:53.307255  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.690699ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38296]
E0920 06:48:53.383984  108489 factory.go:590] Error getting pod permit-plugin1683d175-8852-4e4d-b7a4-65f8210a961d/signalling-pod for retry: Get http://127.0.0.1:36687/api/v1/namespaces/permit-plugin1683d175-8852-4e4d-b7a4-65f8210a961d/pods/signalling-pod: dial tcp 127.0.0.1:36687: connect: connection refused; retrying...
I0920 06:48:53.384811  108489 httplog.go:90] GET /api/v1/namespaces/default: (2.836379ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52156]
I0920 06:48:53.388277  108489 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (2.485493ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52156]
I0920 06:48:53.390792  108489 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.830224ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52156]
I0920 06:48:53.406756  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.120698ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38296]
I0920 06:48:53.474924  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:53.475228  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:53.478303  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:53.478331  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:53.479064  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:53.480186  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:53.481113  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:53.507660  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.937384ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38296]
I0920 06:48:53.606887  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.057677ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38296]
I0920 06:48:53.706868  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.171811ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38296]
I0920 06:48:53.807108  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.398116ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38296]
I0920 06:48:53.906868  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.059922ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38296]
I0920 06:48:54.006596  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.820785ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38296]
I0920 06:48:54.081252  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:54.081450  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:54.082802  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:54.082825  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:54.082830  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:54.082852  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:54.107301  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.629945ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38296]
I0920 06:48:54.206400  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.743462ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38296]
I0920 06:48:54.285359  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:54.307351  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.661747ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38296]
I0920 06:48:54.410938  108489 httplog.go:90] GET /api/v1/nodes/node-2: (6.213624ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38296]
I0920 06:48:54.475167  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:54.475397  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:54.478480  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:54.478539  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:54.479288  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:54.480351  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:54.481289  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:54.509886  108489 httplog.go:90] GET /api/v1/nodes/node-2: (5.165168ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38296]
I0920 06:48:54.607567  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.741672ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38296]
I0920 06:48:54.707538  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.751846ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38296]
I0920 06:48:54.806376  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.548309ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38296]
I0920 06:48:54.906333  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.554421ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38296]
I0920 06:48:55.007019  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.352276ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38296]
I0920 06:48:55.081460  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:55.081653  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:55.082896  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:55.082919  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:55.082986  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:55.083029  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:55.110488  108489 httplog.go:90] GET /api/v1/nodes/node-2: (4.469351ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38296]
I0920 06:48:55.206274  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.506484ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38296]
I0920 06:48:55.285543  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:55.306528  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.873961ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38296]
I0920 06:48:55.406266  108489 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 10.025186749s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:48:50 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:48:55.406326  108489 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 10.025259017s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:48:50 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:48:55.406347  108489 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 10.025281117s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:48:50 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:48:55.406366  108489 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 10.025299799s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:48:50 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:48:55.406444  108489 node_lifecycle_controller.go:796] Node node-0 is unresponsive as of 2019-09-20 06:48:55.406414971 +0000 UTC m=+247.028595320. Adding it to the Taint queue.
I0920 06:48:55.406486  108489 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 10.025173458s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:48:50 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:48:55.406505  108489 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 10.02519334s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:48:50 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:48:55.406521  108489 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 10.025209023s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:48:50 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:48:55.406539  108489 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 10.025227093s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:48:50 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:48:55.406588  108489 node_lifecycle_controller.go:796] Node node-1 is unresponsive as of 2019-09-20 06:48:55.406575066 +0000 UTC m=+247.028755409. Adding it to the Taint queue.
I0920 06:48:55.406632  108489 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 10.025186143s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:48:50 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:48:55.406664  108489 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 10.02521842s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 06:48:40 +0000 UTC,LastTransitionTime:2019-09-20 06:48:50 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 06:48:55.406680  108489 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 10.025234965s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 06:48:40 +0000 UTC,LastTransitionTime:2019-09-20 06:48:50 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 06:48:55.406749  108489 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 10.025302912s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 06:48:40 +0000 UTC,LastTransitionTime:2019-09-20 06:48:50 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
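At the 10s mark the controller rewrites every stale condition to Unknown, choosing between the two reasons visible above: NodeStatusUnknown ("Kubelet stopped posting node status.") when a heartbeat was once seen, and NodeStatusNeverUpdated ("Kubelet never posted node status.") for node-2's pressure conditions. A sketch that builds such a condition with the `k8s.io/api` types; `unknownCondition` and `everSeen` are illustrative names:

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// unknownCondition rewrites a stale node condition to Unknown, with everSeen
// selecting between the two reason strings that appear in the log.
func unknownCondition(condType v1.NodeConditionType, everSeen bool, now metav1.Time) v1.NodeCondition {
	reason, msg := "NodeStatusNeverUpdated", "Kubelet never posted node status."
	if everSeen {
		reason, msg = "NodeStatusUnknown", "Kubelet stopped posting node status."
	}
	return v1.NodeCondition{
		Type:               condType,
		Status:             v1.ConditionUnknown,
		LastTransitionTime: now,
		Reason:             reason,
		Message:            msg,
	}
}

func main() {
	c := unknownCondition(v1.NodeMemoryPressure, false, metav1.Now())
	fmt.Printf("Condition %s was never updated by kubelet -> %s/%s\n", c.Type, c.Status, c.Reason)
}
```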
I0920 06:48:55.407920  108489 httplog.go:90] GET /api/v1/nodes/node-2?resourceVersion=0: (805.708µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:48:55.410621  108489 httplog.go:90] GET /api/v1/nodes/node-2: (5.928147ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38296]
I0920 06:48:55.416907  108489 httplog.go:90] PATCH /api/v1/nodes/node-2: (7.905397ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:48:55.417236  108489 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-2"}
I0920 06:48:55.417240  108489 controller_utils.go:204] Added [&Taint{Key:node.kubernetes.io/unreachable,Value:,Effect:NoExecute,TimeAdded:2019-09-20 06:48:55.406784822 +0000 UTC m=+247.028965169,}] Taint to Node node-2
I0920 06:48:55.417268  108489 taint_manager.go:438] Updating known taints on node node-2: [{node.kubernetes.io/not-ready  NoExecute 2019-09-20 06:48:45 +0000 UTC} {node.kubernetes.io/unreachable  NoExecute 2019-09-20 06:48:55 +0000 UTC}]
I0920 06:48:55.419771  108489 httplog.go:90] GET /api/v1/nodes/node-2?resourceVersion=0: (620.419µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:48:55.425218  108489 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-2"}
I0920 06:48:55.425246  108489 taint_manager.go:438] Updating known taints on node node-2: [{node.kubernetes.io/unreachable  NoExecute 2019-09-20 06:48:55 +0000 UTC}]
I0920 06:48:55.425332  108489 httplog.go:90] PATCH /api/v1/nodes/node-2: (3.206914ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:48:55.425590  108489 controller_utils.go:216] Made sure that Node node-2 has no [&Taint{Key:node.kubernetes.io/not-ready,Value:,Effect:NoExecute,TimeAdded:<nil>,}] Taint
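Note that the taint manager's "known taints" list at .425246 contains only the surviving NoExecute entry: NoSchedule taints never feed evictions, so the manager filters by effect before scheduling work. A minimal sketch of that filter over `v1.Taint` values:

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

// noExecuteTaints keeps only the taints the taint manager acts on, i.e.
// those with the NoExecute effect.
func noExecuteTaints(taints []v1.Taint) []v1.Taint {
	var out []v1.Taint
	for _, t := range taints {
		if t.Effect == v1.TaintEffectNoExecute {
			out = append(out, t)
		}
	}
	return out
}

func main() {
	taints := []v1.Taint{
		{Key: "node.kubernetes.io/unreachable", Effect: v1.TaintEffectNoSchedule},
		{Key: "node.kubernetes.io/unreachable", Effect: v1.TaintEffectNoExecute},
	}
	fmt.Println("Updating known taints on node node-2:", noExecuteTaints(taints))
}
```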
I0920 06:48:55.475358  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:55.475550  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:55.478678  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:55.478674  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:55.479489  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:55.480525  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:55.482812  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:55.506613  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.853089ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:48:55.606602  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.894063ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:48:55.706626  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.947776ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:48:55.807013  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.247308ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:48:55.906746  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.01176ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:48:56.006657  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.877903ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:48:56.081612  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:56.081869  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:56.083018  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:56.083119  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:56.083130  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:56.083610  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:56.106649  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.898105ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:48:56.211626  108489 httplog.go:90] GET /api/v1/nodes/node-2: (4.634632ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:48:56.285757  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:56.306438  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.747192ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:48:56.406873  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.051503ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:48:56.475491  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:56.475737  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:56.478856  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:56.478879  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:56.479652  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:56.480782  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:56.482968  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:56.506801  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.093488ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:48:56.606353  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.755276ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:48:56.708590  108489 httplog.go:90] GET /api/v1/nodes/node-2: (3.970324ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:48:56.806671  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.854155ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:48:56.907842  108489 httplog.go:90] GET /api/v1/nodes/node-2: (3.061001ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:48:57.006604  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.695995ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:48:57.081863  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:57.082157  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:57.083207  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:57.083223  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:57.083263  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:57.083822  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:57.107772  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.865276ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:48:57.206877  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.073118ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:48:57.286057  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:57.311587  108489 httplog.go:90] GET /api/v1/nodes/node-2: (6.904048ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:48:57.406858  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.145799ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:48:57.475741  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:57.475921  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:57.479072  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:57.479867  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:57.479873  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:57.481007  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:57.483100  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:57.509897  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.705ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:48:57.607050  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.250426ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:48:57.707905  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.381069ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:48:57.806580  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.939758ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:48:57.906664  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.9482ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:48:58.006690  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.974097ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:48:58.082066  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:58.082378  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:58.083296  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:58.083403  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:58.083413  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:58.084013  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:58.107570  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.879101ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:48:58.206731  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.941892ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:48:58.286254  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:58.306890  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.103565ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:48:58.406803  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.98016ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:48:58.476129  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:58.476517  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:58.479266  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:58.480043  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:58.480219  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:58.481168  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:58.483324  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:58.507093  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.253338ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:48:58.606617  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.801693ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:48:58.706651  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.014874ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:48:58.807262  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.558475ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:48:58.906536  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.876474ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:48:59.008095  108489 httplog.go:90] GET /api/v1/nodes/node-2: (3.41181ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:48:59.082255  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:59.082590  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:59.083460  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:59.083590  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:59.083645  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:59.084138  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:59.111988  108489 httplog.go:90] GET /api/v1/nodes/node-2: (7.310798ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:48:59.207365  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.666083ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:48:59.286463  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:59.306787  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.06056ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:48:59.406893  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.03788ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:48:59.476326  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:59.476696  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:59.479471  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:59.480240  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:59.480562  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:59.481498  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:59.483449  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:48:59.512477  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.164214ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:48:59.606737  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.924993ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:48:59.706725  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.036575ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:48:59.807082  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.358883ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:48:59.899100  108489 httplog.go:90] GET /api/v1/namespaces/default: (1.777456ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:48:59.901488  108489 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.651574ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:48:59.903826  108489 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.634361ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:48:59.906088  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.536696ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:49:00.007106  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.209605ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:49:00.082427  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:00.082764  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:00.083622  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:00.083808  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:00.083837  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:00.084266  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:00.106550  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.862537ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:49:00.209405  108489 httplog.go:90] GET /api/v1/nodes/node-2: (4.572906ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:49:00.286694  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:00.306572  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.904883ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:49:00.406721  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.030458ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:49:00.426011  108489 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 15.044931205s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:48:50 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:49:00.426079  108489 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 15.045012232s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:48:50 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:49:00.426098  108489 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 15.045032748s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:48:50 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:49:00.426116  108489 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 15.04505001s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:48:50 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:49:00.426204  108489 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 15.044891377s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:48:50 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:49:00.426238  108489 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 15.044925582s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:48:50 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:49:00.426256  108489 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 15.044943503s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:48:50 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:49:00.426277  108489 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 15.044964532s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:48:50 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:49:00.426331  108489 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 15.044885375s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:48:50 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:49:00.426354  108489 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 15.0449087s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 06:48:40 +0000 UTC,LastTransitionTime:2019-09-20 06:48:50 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 06:49:00.426375  108489 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 15.044925979s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 06:48:40 +0000 UTC,LastTransitionTime:2019-09-20 06:48:50 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 06:49:00.426389  108489 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 15.044943565s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 06:48:40 +0000 UTC,LastTransitionTime:2019-09-20 06:48:50 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 06:49:00.426439  108489 node_lifecycle_controller.go:796] Node node-2 is unresponsive as of 2019-09-20 06:49:00.426422539 +0000 UTC m=+252.048602884. Adding it to the Taint queue.
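
The node_lifecycle_controller lines above are its staleness check: each condition's LastHeartbeatTime is compared against the node monitor grace period, and a node whose Ready condition has stayed Unknown past the threshold is queued for NoExecute tainting. A simplified sketch of that comparison (not the controller's actual code; the 10-second grace value is illustrative):

    package main

    import (
        "fmt"
        "time"

        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // staleConditions reports which conditions have a heartbeat older than
    // the grace period, mirroring the "hasn't been updated for ..." lines.
    func staleConditions(conds []v1.NodeCondition, grace time.Duration, now time.Time) []v1.NodeConditionType {
        var stale []v1.NodeConditionType
        for _, c := range conds {
            if now.Sub(c.LastHeartbeatTime.Time) > grace {
                stale = append(stale, c.Type)
            }
        }
        return stale
    }

    func main() {
        now := time.Now()
        conds := []v1.NodeCondition{{
            Type:              v1.NodeReady,
            Status:            v1.ConditionUnknown,
            LastHeartbeatTime: metav1.NewTime(now.Add(-15 * time.Second)),
        }}
        // With a 10s grace period, a 15s-old heartbeat is stale, much like
        // the 15.04s figures logged above.
        fmt.Println(staleConditions(conds, 10*time.Second, now))
    }
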
I0920 06:49:00.476518  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:00.476969  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:00.479657  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:00.480570  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:00.480689  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:00.481654  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:00.483768  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:00.506186  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.524494ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:49:00.606680  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.966213ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:49:00.706521  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.867751ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:49:00.806854  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.959122ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:49:00.906899  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.05432ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:49:01.006838  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.040009ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:49:01.082618  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:01.082944  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:01.083795  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:01.084022  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:01.084107  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:01.084448  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:01.106788  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.070244ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:49:01.206770  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.946465ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:49:01.286890  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:01.307102  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.183541ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:49:01.407345  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.413483ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:49:01.476736  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:01.477173  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:01.480428  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:01.480817  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:01.480986  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:01.481932  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:01.483980  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:01.506529  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.866801ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:49:01.606942  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.094366ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:49:01.707202  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.499378ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:49:01.806940  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.945885ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:49:01.906183  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.527034ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:49:02.007306  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.638495ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:49:02.082856  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:02.083174  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:02.083942  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:02.084150  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:02.084383  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:02.084584  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:02.106756  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.070739ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:49:02.206760  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.013803ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:49:02.287078  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:02.306834  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.19464ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:49:02.406601  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.913307ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:49:02.476925  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:02.477582  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:02.480602  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:02.481008  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:02.481189  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:02.482160  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:02.484135  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:02.506741  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.822458ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:49:02.606577  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.789467ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:49:02.706785  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.014013ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:49:02.806676  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.013459ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:49:02.907137  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.287239ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:49:03.006899  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.12557ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:49:03.083038  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:03.083342  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:03.084145  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:03.084315  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:03.084515  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:03.084765  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:03.106657  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.948465ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:49:03.207013  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.252962ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:49:03.287470  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:03.306648  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.860225ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:49:03.383897  108489 httplog.go:90] GET /api/v1/namespaces/default: (1.743161ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52156]
I0920 06:49:03.386183  108489 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.658564ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52156]
I0920 06:49:03.388408  108489 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.804134ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52156]
I0920 06:49:03.406980  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.224555ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:49:03.477146  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:03.477808  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:03.480737  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:03.481169  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:03.481345  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:03.482346  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:03.484299  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:03.507774  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.922471ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:49:03.606650  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.872754ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:49:03.706845  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.175248ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:49:03.807653  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.022045ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:49:03.906590  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.904106ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:49:04.006503  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.781682ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:49:04.083211  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:04.083546  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:04.084307  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:04.084466  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:04.084672  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:04.084839  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:04.106686  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.979625ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:49:04.207076  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.315761ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:49:04.287656  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:04.307268  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.631005ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:49:04.407010  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.197505ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:49:04.477356  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:04.477930  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:04.480989  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:04.481306  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:04.481529  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:04.482515  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:04.484461  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:04.506503  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.821945ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:49:04.606942  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.11243ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:49:04.706893  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.117709ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:49:04.806845  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.01907ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:49:04.906327  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.649027ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:49:05.007053  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.293178ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:49:05.083399  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:05.083782  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:05.084528  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:05.084636  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:05.084901  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:05.084925  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:05.106778  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.107978ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:49:05.206541  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.655055ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:49:05.287966  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:05.306570  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.847595ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:49:05.406527  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.924356ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:49:05.426812  108489 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 20.045728883s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:48:50 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:49:05.426930  108489 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 20.045858s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:48:50 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:49:05.426955  108489 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 20.045888319s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:48:50 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:49:05.426975  108489 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 20.045908529s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:48:50 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:49:05.427070  108489 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 20.045757015s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:48:50 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:49:05.427100  108489 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 20.045787894s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:48:50 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:49:05.427119  108489 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 20.045806439s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:48:50 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:49:05.427142  108489 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 20.045823228s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:48:50 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:49:05.427200  108489 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 20.045754501s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:48:50 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:49:05.427220  108489 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 20.045773504s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 06:48:40 +0000 UTC,LastTransitionTime:2019-09-20 06:48:50 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 06:49:05.427238  108489 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 20.045792198s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 06:48:40 +0000 UTC,LastTransitionTime:2019-09-20 06:48:50 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 06:49:05.427280  108489 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 20.045810324s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 06:48:40 +0000 UTC,LastTransitionTime:2019-09-20 06:48:50 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 06:49:05.477549  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:05.478161  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:05.481171  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:05.481434  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:05.481776  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:05.482661  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:05.484657  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:05.506393  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.724892ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:49:05.607130  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.335374ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:49:05.706739  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.997164ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:49:05.806843  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.126471ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:49:05.907130  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.304294ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:49:06.006769  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.065445ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:49:06.083535  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:06.084009  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:06.084725  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:06.084773  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:06.085016  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:06.085028  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:06.108255  108489 httplog.go:90] GET /api/v1/nodes/node-2: (3.573887ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:49:06.206542  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.678807ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:49:06.288399  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:06.306751  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.938536ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:49:06.406602  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.952298ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:49:06.477765  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:06.478430  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:06.481342  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:06.481592  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:06.481912  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:06.482832  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:06.484993  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:06.506953  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.16323ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:49:06.606390  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.555668ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:49:06.707229  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.571185ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:49:06.806752  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.004287ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:49:06.907109  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.3145ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:49:07.007376  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.285331ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:49:07.083761  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:07.084257  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:07.084988  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:07.085045  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:07.085128  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:07.085246  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:07.106838  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.036897ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:49:07.206838  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.09642ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:49:07.288622  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:07.306771  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.944416ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:49:07.407354  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.846901ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:49:07.477969  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:07.478589  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:07.481526  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:07.481793  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:07.482065  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:07.483012  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:07.487867  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:07.506844  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.046375ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:49:07.607003  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.215553ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:49:07.706636  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.826192ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:49:07.806582  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.899159ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:49:07.906563  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.753763ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:49:08.006773  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.088341ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:49:08.083960  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:08.084437  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:08.085181  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:08.085341  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:08.085203  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:08.085450  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:08.106573  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.624867ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:49:08.207047  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.283162ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:49:08.288939  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:08.314561  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.664396ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:49:08.406858  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.19153ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:49:08.478208  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:08.478799  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:08.481724  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:08.481947  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:08.482156  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:08.483197  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:08.488045  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:08.506363  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.67459ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:49:08.606470  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.812898ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:49:08.706743  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.921378ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:49:08.806821  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.043045ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:49:08.906636  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.789242ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:49:09.006566  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.740892ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:49:09.084183  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:09.084659  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:09.085481  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:09.085547  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:09.085738  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:09.085743  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:09.109318  108489 httplog.go:90] GET /api/v1/nodes/node-2: (4.644991ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:49:09.206783  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.900636ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:49:09.289318  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:09.306745  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.784499ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:49:09.406318  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.660864ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:49:09.478456  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:09.478952  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:09.481881  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:09.482103  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:09.482267  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:09.483400  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:09.488252  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:09.506512  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.795983ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:49:09.606529  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.8585ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:49:09.706771  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.113159ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:49:09.806523  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.50125ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:49:09.899568  108489 httplog.go:90] GET /api/v1/namespaces/default: (2.16363ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:49:09.901950  108489 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.538393ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:49:09.904389  108489 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.783829ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:49:09.906520  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.693941ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:49:10.006807  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.015632ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:49:10.084376  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:10.084880  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:10.085595  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:10.085686  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:10.086082  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:10.086206  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:10.106867  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.124199ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:49:10.206929  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.117943ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:49:10.289559  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:10.306985  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.222212ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:49:10.407167  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.191128ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:49:10.427581  108489 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 25.046496775s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:48:50 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:49:10.427647  108489 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 25.046581262s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:48:50 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:49:10.427664  108489 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 25.046598995s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:48:50 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:49:10.427774  108489 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 25.046705344s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:48:50 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:49:10.427880  108489 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 25.046568423s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:48:50 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:49:10.427947  108489 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 25.046633936s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:48:50 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:49:10.427977  108489 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 25.046664542s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:48:50 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:49:10.428072  108489 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 25.046758503s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:48:50 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:49:10.428143  108489 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 25.046698306s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 06:48:50 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 06:49:10.428162  108489 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 25.046717602s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 06:48:40 +0000 UTC,LastTransitionTime:2019-09-20 06:48:50 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 06:49:10.428196  108489 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 25.046749534s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 06:48:40 +0000 UTC,LastTransitionTime:2019-09-20 06:48:50 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 06:49:10.428218  108489 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 25.04677253s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 06:48:40 +0000 UTC,LastTransitionTime:2019-09-20 06:48:50 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 06:49:10.478641  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:10.479122  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:10.482127  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:10.482278  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:10.482375  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:10.483575  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:10.488542  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:10.506783  108489 httplog.go:90] GET /api/v1/nodes/node-2: (2.030751ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:49:10.508900  108489 httplog.go:90] GET /api/v1/nodes/node-2: (1.57959ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
Sep 20 06:49:10.509: INFO: Waiting up to 15s for pod "testpod-0" in namespace "taint-based-evictionsc58212e0-ac9c-4d15-9b66-87b5857c5b20" to be "updated with tolerationSeconds of 200"
I0920 06:49:10.511150  108489 httplog.go:90] GET /api/v1/namespaces/taint-based-evictionsc58212e0-ac9c-4d15-9b66-87b5857c5b20/pods/testpod-0: (1.680615ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
Sep 20 06:49:10.511: INFO: Pod "testpod-0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.161863ms
Sep 20 06:49:10.511: INFO: Pod "testpod-0" satisfied condition "updated with tolerationSeconds of 200"
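
The condition just confirmed, "updated with tolerationSeconds of 200", corresponds to a not-ready toleration on the pod. A sketch of that toleration built with the core/v1 API (the 200-second value mirrors the log; the rest is the standard shape):

    package main

    import (
        "fmt"

        v1 "k8s.io/api/core/v1"
    )

    func main() {
        // Tolerate the node.kubernetes.io/not-ready NoExecute taint for
        // 200 seconds before the taint manager may evict the pod.
        seconds := int64(200)
        tol := v1.Toleration{
            Key:               "node.kubernetes.io/not-ready",
            Operator:          v1.TolerationOpExists,
            Effect:            v1.TaintEffectNoExecute,
            TolerationSeconds: &seconds,
        }
        fmt.Printf("%+v\n", tol)
    }
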
I0920 06:49:10.517518  108489 taint_manager.go:383] Noticed pod deletion: types.NamespacedName{Namespace:"taint-based-evictionsc58212e0-ac9c-4d15-9b66-87b5857c5b20", Name:"testpod-0"}
I0920 06:49:10.517552  108489 timed_workers.go:129] Cancelling TimedWorkerQueue item taint-based-evictionsc58212e0-ac9c-4d15-9b66-87b5857c5b20/testpod-0 at 2019-09-20 06:49:10.517548185 +0000 UTC m=+262.139728531
I0920 06:49:10.517730  108489 httplog.go:90] DELETE /api/v1/namespaces/taint-based-evictionsc58212e0-ac9c-4d15-9b66-87b5857c5b20/pods/testpod-0: (5.960196ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:49:10.517807  108489 event.go:255] Event(v1.ObjectReference{Kind:"Pod", Namespace:"taint-based-evictionsc58212e0-ac9c-4d15-9b66-87b5857c5b20", Name:"testpod-0", UID:"", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TaintManagerEviction' Cancelling deletion of Pod taint-based-evictionsc58212e0-ac9c-4d15-9b66-87b5857c5b20/testpod-0
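
The taint_manager and timed_workers lines above show a scheduled eviction being cancelled because the pod was deleted before its deadline. A generic sketch of that schedule-then-cancel pattern (not the actual TimedWorkerQueue implementation; names here are illustrative):

    package main

    import (
        "fmt"
        "sync"
        "time"
    )

    // timedQueue schedules deferred work per key and lets it be cancelled
    // before the timer fires, like the eviction timers above.
    type timedQueue struct {
        mu     sync.Mutex
        timers map[string]*time.Timer
    }

    func (q *timedQueue) addWork(key string, delay time.Duration, fn func()) {
        q.mu.Lock()
        defer q.mu.Unlock()
        q.timers[key] = time.AfterFunc(delay, fn)
    }

    func (q *timedQueue) cancelWork(key string) {
        q.mu.Lock()
        defer q.mu.Unlock()
        if t, ok := q.timers[key]; ok {
            t.Stop() // pod gone before the deadline: drop the pending eviction
            delete(q.timers, key)
            fmt.Println("Cancelling item", key)
        }
    }

    func main() {
        q := &timedQueue{timers: map[string]*time.Timer{}}
        q.addWork("ns/testpod-0", 200*time.Second, func() { fmt.Println("evict") })
        q.cancelWork("ns/testpod-0")
    }
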
I0920 06:49:10.520412  108489 httplog.go:90] GET /api/v1/namespaces/taint-based-evictionsc58212e0-ac9c-4d15-9b66-87b5857c5b20/pods/testpod-0: (1.11091ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38298]
I0920 06:49:10.520684  108489 httplog.go:90] PATCH /api/v1/namespaces/taint-based-evictionsc58212e0-ac9c-4d15-9b66-87b5857c5b20/events/testpod-0.15c612beb0c04110: (2.721938ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38296]
I0920 06:49:10.526595  108489 node_tree.go:113] Removed node "node-0" in group "region1:\x00:zone1" from NodeTree
I0920 06:49:10.526677  108489 taint_manager.go:422] Noticed node deletion: "node-0"
I0920 06:49:10.528455  108489 node_tree.go:113] Removed node "node-1" in group "region1:\x00:zone1" from NodeTree
I0920 06:49:10.528490  108489 taint_manager.go:422] Noticed node deletion: "node-1"
I0920 06:49:10.531357  108489 httplog.go:90] DELETE /api/v1/nodes: (10.105157ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38296]
I0920 06:49:10.531899  108489 node_tree.go:113] Removed node "node-2" in group "region1:\x00:zone1" from NodeTree
I0920 06:49:10.532000  108489 taint_manager.go:422] Noticed node deletion: "node-2"
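
The DELETE /api/v1/nodes call above is the test's teardown: a single collection delete that the scheduler's NodeTree and the taint manager then observe as individual node deletions. A sketch using the client-go API of this era (pre-context method signatures):

    package sketch

    import (
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // deleteAllNodes removes every node in one collection delete,
    // corresponding to the "DELETE /api/v1/nodes" request logged above.
    func deleteAllNodes(client kubernetes.Interface) error {
        return client.CoreV1().Nodes().DeleteCollection(&metav1.DeleteOptions{}, metav1.ListOptions{})
    }
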
I0920 06:49:11.084568  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:11.085068  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:11.085759  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:11.085925  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:11.086374  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:11.086490  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:11.289762  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:11.478877  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:11.479280  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:11.482335  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:11.482632  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:11.482790  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:11.483798  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 06:49:11.488791  108489 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
    --- FAIL: TestTaintBasedEvictions/Taint_based_evictions_for_NodeNotReady_and_200_tolerationseconds (35.13s)
        taint_test.go:782: Failed to taint node in test 0 <node-2>, err: timed out waiting for the condition

				from junit_d965d8661547eb73cabe6d94d5550ec333e4c0fa_20190920-063834.xml

Filter through log files | View test history on testgrid
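
For context on the failure above: the long runs of GET /api/v1/nodes/node-2 every ~100ms are the test polling for the expected NoExecute taint, and "timed out waiting for the condition" is the literal message of wait.ErrWaitTimeout when the deadline passes first. A sketch of that polling loop under assumed names (waitForNoExecuteTaint and the 30-second timeout are illustrative, not taken from taint_test.go):

    package sketch

    import (
        "time"

        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitForNoExecuteTaint re-fetches the node every 100ms until the
    // not-ready NoExecute taint appears or the timeout elapses, in which
    // case wait.PollImmediate returns wait.ErrWaitTimeout ("timed out
    // waiting for the condition").
    func waitForNoExecuteTaint(client kubernetes.Interface, nodeName string) error {
        return wait.PollImmediate(100*time.Millisecond, 30*time.Second, func() (bool, error) {
            node, err := client.CoreV1().Nodes().Get(nodeName, metav1.GetOptions{})
            if err != nil {
                return false, err
            }
            for _, t := range node.Spec.Taints {
                if t.Key == "node.kubernetes.io/not-ready" && t.Effect == v1.TaintEffectNoExecute {
                    return true, nil
                }
            }
            return false, nil
        })
    }
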


k8s.io/kubernetes/test/integration/scheduler TestTaintBasedEvictions/Taint_based_evictions_for_NodeNotReady_with_no_pod_tolerations 34s

go test -v k8s.io/kubernetes/test/integration/scheduler -run TestTaintBasedEvictions/Taint_based_evictions_for_NodeNotReady_with_no_pod_tolerations$
=== RUN   TestTaintBasedEvictions/Taint_based_evictions_for_NodeNotReady_with_no_pod_tolerations
W0920 06:49:11.532573  108489 services.go:35] No CIDR for service cluster IPs specified. Default value which was 10.0.0.0/24 is deprecated and will be removed in future releases. Please specify it using --service-cluster-ip-range on kube-apiserver.
I0920 06:49:11.532604  108489 services.go:47] Setting service IP to "10.0.0.1" (read-write).
I0920 06:49:11.532673  108489 master.go:303] Node port range unspecified. Defaulting to 30000-32767.
I0920 06:49:11.532687  108489 master.go:259] Using reconciler: 
I0920 06:49:11.535682  108489 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e02def31-82e9-4378-822c-ad2f1dc064b6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:11.536111  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:49:11.536291  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:49:11.537831  108489 store.go:1342] Monitoring podtemplates count at <storage-prefix>//podtemplates
I0920 06:49:11.537881  108489 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e02def31-82e9-4378-822c-ad2f1dc064b6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:11.537925  108489 reflector.go:153] Listing and watching *core.PodTemplate from storage/cacher.go:/podtemplates
I0920 06:49:11.538232  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:49:11.538278  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:49:11.541336  108489 watch_cache.go:405] Replace watchCache (rev: 54960) 
I0920 06:49:11.542296  108489 store.go:1342] Monitoring events count at <storage-prefix>//events
I0920 06:49:11.542350  108489 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e02def31-82e9-4378-822c-ad2f1dc064b6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:11.542376  108489 reflector.go:153] Listing and watching *core.Event from storage/cacher.go:/events
I0920 06:49:11.542565  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:49:11.542590  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:49:11.543565  108489 store.go:1342] Monitoring limitranges count at <storage-prefix>//limitranges
I0920 06:49:11.543621  108489 reflector.go:153] Listing and watching *core.LimitRange from storage/cacher.go:/limitranges
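
Each storage_factory.go block above wires one resource's storage to the local test etcd. The config it echoes can be read back as the following sketch (values taken from the log lines themselves; the field set follows the apiserver storagebackend package of this era):

    package sketch

    import (
        "time"

        "k8s.io/apiserver/pkg/storage/storagebackend"
    )

    // testStorageConfig mirrors the printed config: every resource backed
    // by etcd at 127.0.0.1:2379 under a per-test random prefix
    // ("e02def31-..." in this run), with a 5m compaction interval and a
    // 1m count-metric poll period (the nanosecond values in the log).
    func testStorageConfig(prefix string) storagebackend.Config {
        return storagebackend.Config{
            Prefix: prefix,
            Transport: storagebackend.TransportConfig{
                ServerList: []string{"http://127.0.0.1:2379"},
            },
            Paging:                true,
            CompactionInterval:    5 * time.Minute,
            CountMetricPollPeriod: time.Minute,
        }
    }
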
I0920 06:49:11.544009  108489 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e02def31-82e9-4378-822c-ad2f1dc064b6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:11.545038  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:49:11.545126  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:49:11.545143  108489 watch_cache.go:405] Replace watchCache (rev: 54962) 
I0920 06:49:11.545496  108489 watch_cache.go:405] Replace watchCache (rev: 54963) 
I0920 06:49:11.546504  108489 store.go:1342] Monitoring resourcequotas count at <storage-prefix>//resourcequotas
I0920 06:49:11.546587  108489 reflector.go:153] Listing and watching *core.ResourceQuota from storage/cacher.go:/resourcequotas
I0920 06:49:11.546694  108489 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e02def31-82e9-4378-822c-ad2f1dc064b6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:11.546898  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:49:11.546920  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:49:11.547442  108489 watch_cache.go:405] Replace watchCache (rev: 54964) 
I0920 06:49:11.547870  108489 store.go:1342] Monitoring secrets count at <storage-prefix>//secrets
I0920 06:49:11.548011  108489 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e02def31-82e9-4378-822c-ad2f1dc064b6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:11.548034  108489 reflector.go:153] Listing and watching *core.Secret from storage/cacher.go:/secrets
I0920 06:49:11.548120  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:49:11.548135  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:49:11.549609  108489 store.go:1342] Monitoring persistentvolumes count at <storage-prefix>//persistentvolumes
I0920 06:49:11.549742  108489 reflector.go:153] Listing and watching *core.PersistentVolume from storage/cacher.go:/persistentvolumes
I0920 06:49:11.549808  108489 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e02def31-82e9-4378-822c-ad2f1dc064b6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:11.549955  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:49:11.549974  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:49:11.552158  108489 watch_cache.go:405] Replace watchCache (rev: 54970) 
I0920 06:49:11.552681  108489 store.go:1342] Monitoring persistentvolumeclaims count at <storage-prefix>//persistentvolumeclaims
I0920 06:49:11.552865  108489 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e02def31-82e9-4378-822c-ad2f1dc064b6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:11.553033  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:49:11.553060  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:49:11.553058  108489 reflector.go:153] Listing and watching *core.PersistentVolumeClaim from storage/cacher.go:/persistentvolumeclaims
I0920 06:49:11.553312  108489 watch_cache.go:405] Replace watchCache (rev: 54966) 
I0920 06:49:11.554851  108489 store.go:1342] Monitoring configmaps count at <storage-prefix>//configmaps
I0920 06:49:11.554998  108489 reflector.go:153] Listing and watching *core.ConfigMap from storage/cacher.go:/configmaps
I0920 06:49:11.555005  108489 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e02def31-82e9-4378-822c-ad2f1dc064b6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:11.555133  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:49:11.555153  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:49:11.555218  108489 watch_cache.go:405] Replace watchCache (rev: 54970) 
I0920 06:49:11.556378  108489 store.go:1342] Monitoring namespaces count at <storage-prefix>//namespaces
I0920 06:49:11.556567  108489 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e02def31-82e9-4378-822c-ad2f1dc064b6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:11.556589  108489 reflector.go:153] Listing and watching *core.Namespace from storage/cacher.go:/namespaces
I0920 06:49:11.556769  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:49:11.556829  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:49:11.557591  108489 watch_cache.go:405] Replace watchCache (rev: 54972) 
I0920 06:49:11.559503  108489 store.go:1342] Monitoring endpoints count at <storage-prefix>//services/endpoints
I0920 06:49:11.559692  108489 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e02def31-82e9-4378-822c-ad2f1dc064b6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:11.559883  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:49:11.559912  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:49:11.559997  108489 reflector.go:153] Listing and watching *core.Endpoints from storage/cacher.go:/services/endpoints
I0920 06:49:11.560197  108489 watch_cache.go:405] Replace watchCache (rev: 54972) 
I0920 06:49:11.560730  108489 store.go:1342] Monitoring nodes count at <storage-prefix>//minions
I0920 06:49:11.560935  108489 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e02def31-82e9-4378-822c-ad2f1dc064b6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:11.561106  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:49:11.561127  108489 reflector.go:153] Listing and watching *core.Node from storage/cacher.go:/minions
I0920 06:49:11.561137  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:49:11.561360  108489 watch_cache.go:405] Replace watchCache (rev: 54976) 
I0920 06:49:11.561960  108489 watch_cache.go:405] Replace watchCache (rev: 54977) 
I0920 06:49:11.561983  108489 reflector.go:153] Listing and watching *core.Pod from storage/cacher.go:/pods
I0920 06:49:11.561960  108489 store.go:1342] Monitoring pods count at <storage-prefix>//pods
I0920 06:49:11.562267  108489 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e02def31-82e9-4378-822c-ad2f1dc064b6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:11.562434  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:49:11.562455  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:49:11.564176  108489 store.go:1342] Monitoring serviceaccounts count at <storage-prefix>//serviceaccounts
I0920 06:49:11.564301  108489 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e02def31-82e9-4378-822c-ad2f1dc064b6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:11.564413  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:49:11.564444  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:49:11.564518  108489 reflector.go:153] Listing and watching *core.ServiceAccount from storage/cacher.go:/serviceaccounts
I0920 06:49:11.565289  108489 watch_cache.go:405] Replace watchCache (rev: 54979) 
I0920 06:49:11.565768  108489 store.go:1342] Monitoring services count at <storage-prefix>//services/specs
I0920 06:49:11.565805  108489 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e02def31-82e9-4378-822c-ad2f1dc064b6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:11.565923  108489 watch_cache.go:405] Replace watchCache (rev: 54981) 
I0920 06:49:11.565951  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:49:11.565965  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:49:11.566037  108489 reflector.go:153] Listing and watching *core.Service from storage/cacher.go:/services/specs
I0920 06:49:11.572792  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:49:11.572843  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:49:11.573798  108489 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e02def31-82e9-4378-822c-ad2f1dc064b6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:11.574110  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:49:11.574142  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:49:11.574777  108489 store.go:1342] Monitoring replicationcontrollers count at <storage-prefix>//controllers
I0920 06:49:11.574804  108489 rest.go:115] the default service ipfamily for this cluster is: IPv4
I0920 06:49:11.575005  108489 watch_cache.go:405] Replace watchCache (rev: 54991) 
I0920 06:49:11.575313  108489 storage_factory.go:285] storing bindings in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e02def31-82e9-4378-822c-ad2f1dc064b6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:11.575344  108489 reflector.go:153] Listing and watching *core.ReplicationController from storage/cacher.go:/controllers
I0920 06:49:11.575538  108489 storage_factory.go:285] storing componentstatuses in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e02def31-82e9-4378-822c-ad2f1dc064b6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:11.576277  108489 watch_cache.go:405] Replace watchCache (rev: 54991) 
I0920 06:49:11.576364  108489 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e02def31-82e9-4378-822c-ad2f1dc064b6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:11.577156  108489 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e02def31-82e9-4378-822c-ad2f1dc064b6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:11.578205  108489 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e02def31-82e9-4378-822c-ad2f1dc064b6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:11.578770  108489 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e02def31-82e9-4378-822c-ad2f1dc064b6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:11.580638  108489 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e02def31-82e9-4378-822c-ad2f1dc064b6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:11.580861  108489 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e02def31-82e9-4378-822c-ad2f1dc064b6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:11.581068  108489 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e02def31-82e9-4378-822c-ad2f1dc064b6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:11.581641  108489 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e02def31-82e9-4378-822c-ad2f1dc064b6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:11.582278  108489 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e02def31-82e9-4378-822c-ad2f1dc064b6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:11.582576  108489 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e02def31-82e9-4378-822c-ad2f1dc064b6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:11.583486  108489 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e02def31-82e9-4378-822c-ad2f1dc064b6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:11.583934  108489 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e02def31-82e9-4378-822c-ad2f1dc064b6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:11.584683  108489 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e02def31-82e9-4378-822c-ad2f1dc064b6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:11.584967  108489 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e02def31-82e9-4378-822c-ad2f1dc064b6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:11.585664  108489 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e02def31-82e9-4378-822c-ad2f1dc064b6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:11.585973  108489 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e02def31-82e9-4378-822c-ad2f1dc064b6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:11.586169  108489 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e02def31-82e9-4378-822c-ad2f1dc064b6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:11.586331  108489 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e02def31-82e9-4378-822c-ad2f1dc064b6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:11.586695  108489 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e02def31-82e9-4378-822c-ad2f1dc064b6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:11.586865  108489 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e02def31-82e9-4378-822c-ad2f1dc064b6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:11.587103  108489 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e02def31-82e9-4378-822c-ad2f1dc064b6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:11.588224  108489 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e02def31-82e9-4378-822c-ad2f1dc064b6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:11.588510  108489 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e02def31-82e9-4378-822c-ad2f1dc064b6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:11.589428  108489 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e02def31-82e9-4378-822c-ad2f1dc064b6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:11.590277  108489 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e02def31-82e9-4378-822c-ad2f1dc064b6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:11.590553  108489 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e02def31-82e9-4378-822c-ad2f1dc064b6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:11.591518  108489 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e02def31-82e9-4378-822c-ad2f1dc064b6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:11.592682  108489 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e02def31-82e9-4378-822c-ad2f1dc064b6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:11.593669  108489 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e02def31-82e9-4378-822c-ad2f1dc064b6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:11.594767  108489 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e02def31-82e9-4378-822c-ad2f1dc064b6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:11.597024  108489 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e02def31-82e9-4378-822c-ad2f1dc064b6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:11.597650  108489 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e02def31-82e9-4378-822c-ad2f1dc064b6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:11.600571  108489 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e02def31-82e9-4378-822c-ad2f1dc064b6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:11.601385  108489 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e02def31-82e9-4378-822c-ad2f1dc064b6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:11.601736  108489 master.go:450] Skipping disabled API group "auditregistration.k8s.io".
I0920 06:49:11.603219  108489 master.go:461] Enabling API group "authentication.k8s.io".
I0920 06:49:11.603255  108489 master.go:461] Enabling API group "authorization.k8s.io".
I0920 06:49:11.603770  108489 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"e02def31-82e9-4378-822c-ad2f1dc064b6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:11.603962  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:49:11.603989  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:49:11.606366  108489 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0920 06:49:11.606576  108489 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"e02def31-82e9-4378-822c-ad2f1dc064b6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:11.609011  108489 reflector.go:153] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0920 06:49:11.642452  108489 watch_cache.go:405] Replace watchCache (rev: 55020) 
I0920 06:49:11.643055  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:49:11.643093  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:49:11.643930  108489 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0920 06:49:11.644146  108489 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"e02def31-82e9-4378-822c-ad2f1dc064b6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:11.644242  108489 reflector.go:153] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0920 06:49:11.644313  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:49:11.644337  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:49:11.645601  108489 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0920 06:49:11.645641  108489 master.go:461] Enabling API group "autoscaling".
I0920 06:49:11.645857  108489 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"e02def31-82e9-4378-822c-ad2f1dc064b6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:11.645994  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:49:11.646186  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:49:11.646312  108489 reflector.go:153] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0920 06:49:11.646472  108489 watch_cache.go:405] Replace watchCache (rev: 55046) 
I0920 06:49:11.649446  108489 store.go:1342] Monitoring jobs.batch count at <storage-prefix>//jobs
I0920 06:49:11.649650  108489 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"e02def31-82e9-4378-822c-ad2f1dc064b6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:11.649672  108489 reflector.go:153] Listing and watching *batch.Job from storage/cacher.go:/jobs
I0920 06:49:11.649864  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:49:11.649890  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:49:11.650592  108489 store.go:1342] Monitoring cronjobs.batch count at <storage-prefix>//cronjobs
I0920 06:49:11.650622  108489 master.go:461] Enabling API group "batch".
I0920 06:49:11.650656  108489 reflector.go:153] Listing and watching *batch.CronJob from storage/cacher.go:/cronjobs
I0920 06:49:11.650819  108489 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e02def31-82e9-4378-822c-ad2f1dc064b6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:11.650925  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:49:11.650942  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:49:11.651557  108489 store.go:1342] Monitoring certificatesigningrequests.certificates.k8s.io count at <storage-prefix>//certificatesigningrequests
I0920 06:49:11.651581  108489 reflector.go:153] Listing and watching *certificates.CertificateSigningRequest from storage/cacher.go:/certificatesigningrequests
I0920 06:49:11.651584  108489 watch_cache.go:405] Replace watchCache (rev: 55047) 
I0920 06:49:11.651593  108489 master.go:461] Enabling API group "certificates.k8s.io".
I0920 06:49:11.651798  108489 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e02def31-82e9-4378-822c-ad2f1dc064b6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:11.651973  108489 watch_cache.go:405] Replace watchCache (rev: 55048) 
I0920 06:49:11.652021  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:49:11.652044  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:49:11.652965  108489 watch_cache.go:405] Replace watchCache (rev: 55049) 
I0920 06:49:11.653106  108489 watch_cache.go:405] Replace watchCache (rev: 55049) 
I0920 06:49:11.653262  108489 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0920 06:49:11.653298  108489 reflector.go:153] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0920 06:49:11.653412  108489 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e02def31-82e9-4378-822c-ad2f1dc064b6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:11.653535  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:49:11.653567  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:49:11.654246  108489 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0920 06:49:11.654274  108489 master.go:461] Enabling API group "coordination.k8s.io".
I0920 06:49:11.654290  108489 master.go:450] Skipping disabled API group "discovery.k8s.io".
I0920 06:49:11.654300  108489 reflector.go:153] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0920 06:49:11.654430  108489 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e02def31-82e9-4378-822c-ad2f1dc064b6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:11.654547  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:49:11.654579  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:49:11.655161  108489 watch_cache.go:405] Replace watchCache (rev: 55050) 
I0920 06:49:11.655236  108489 watch_cache.go:405] Replace watchCache (rev: 55050) 
I0920 06:49:11.655888  108489 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0920 06:49:11.656046  108489 master.go:461] Enabling API group "extensions".
I0920 06:49:11.655926  108489 reflector.go:153] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0920 06:49:11.656810  108489 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e02def31-82e9-4378-822c-ad2f1dc064b6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:11.657000  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:49:11.657028  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:49:11.667798  108489 watch_cache.go:405] Replace watchCache (rev: 55066) 
I0920 06:49:11.668445  108489 store.go:1342] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I0920 06:49:11.668489  108489 reflector.go:153] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I0920 06:49:11.668657  108489 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e02def31-82e9-4378-822c-ad2f1dc064b6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:11.669118  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:49:11.669182  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:49:11.669967  108489 watch_cache.go:405] Replace watchCache (rev: 55068) 
I0920 06:49:11.672389  108489 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0920 06:49:11.672426  108489 master.go:461] Enabling API group "networking.k8s.io".
I0920 06:49:11.672459  108489 reflector.go:153] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0920 06:49:11.672470  108489 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e02def31-82e9-4378-822c-ad2f1dc064b6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:11.672653  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:49:11.672668  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:49:11.673499  108489 store.go:1342] Monitoring runtimeclasses.node.k8s.io count at <storage-prefix>//runtimeclasses
I0920 06:49:11.673524  108489 master.go:461] Enabling API group "node.k8s.io".
I0920 06:49:11.673775  108489 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"e02def31-82e9-4378-822c-ad2f1dc064b6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:11.673956  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:49:11.673982  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:49:11.674071  108489 reflector.go:153] Listing and watching *node.RuntimeClass from storage/cacher.go:/runtimeclasses
I0920 06:49:11.674088  108489 watch_cache.go:405] Replace watchCache (rev: 55069) 
I0920 06:49:11.675559  108489 watch_cache.go:405] Replace watchCache (rev: 55069) 
I0920 06:49:11.676506  108489 store.go:1342] Monitoring poddisruptionbudgets.policy count at <storage-prefix>//poddisruptionbudgets
I0920 06:49:11.676745  108489 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"e02def31-82e9-4378-822c-ad2f1dc064b6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:11.676938  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:49:11.676967  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:49:11.677053  108489 reflector.go:153] Listing and watching *policy.PodDisruptionBudget from storage/cacher.go:/poddisruptionbudgets
I0920 06:49:11.677898  108489 store.go:1342] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicy
I0920 06:49:11.677922  108489 master.go:461] Enabling API group "policy".
I0920 06:49:11.677987  108489 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e02def31-82e9-4378-822c-ad2f1dc064b6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:11.678155  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:49:11.678188  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:49:11.678272  108489 reflector.go:153] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicy
I0920 06:49:11.679027  108489 watch_cache.go:405] Replace watchCache (rev: 55069) 
I0920 06:49:11.682143  108489 watch_cache.go:405] Replace watchCache (rev: 55069) 
I0920 06:49:11.686897  108489 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0920 06:49:11.687663  108489 reflector.go:153] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0920 06:49:11.688198  108489 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e02def31-82e9-4378-822c-ad2f1dc064b6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:11.688504  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:49:11.688536  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:49:11.691440  108489 watch_cache.go:405] Replace watchCache (rev: 55076) 
I0920 06:49:11.691483  108489 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0920 06:49:11.691535  108489 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e02def31-82e9-4378-822c-ad2f1dc064b6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:11.691647  108489 reflector.go:153] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0920 06:49:11.691770  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:49:11.691795  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:49:11.694914  108489 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0920 06:49:11.695075  108489 reflector.go:153] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0920 06:49:11.695104  108489 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e02def31-82e9-4378-822c-ad2f1dc064b6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:11.695271  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:49:11.695302  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:49:11.696188  108489 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0920 06:49:11.696272  108489 reflector.go:153] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0920 06:49:11.696262  108489 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e02def31-82e9-4378-822c-ad2f1dc064b6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:11.696396  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:49:11.696427  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:49:11.697069  108489 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0920 06:49:11.697188  108489 reflector.go:153] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0920 06:49:11.697277  108489 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e02def31-82e9-4378-822c-ad2f1dc064b6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:11.697399  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:49:11.697426  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:49:11.697965  108489 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0920 06:49:11.698006  108489 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e02def31-82e9-4378-822c-ad2f1dc064b6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:11.698134  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:49:11.698151  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:49:11.698149  108489 reflector.go:153] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0920 06:49:11.698613  108489 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0920 06:49:11.698759  108489 reflector.go:153] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0920 06:49:11.698784  108489 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e02def31-82e9-4378-822c-ad2f1dc064b6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:11.698905  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:49:11.698929  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:49:11.699476  108489 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0920 06:49:11.699514  108489 master.go:461] Enabling API group "rbac.authorization.k8s.io".
I0920 06:49:11.699905  108489 reflector.go:153] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
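
Each storage_factory.go:285 line above prints the storagebackend.Config the apiserver assembles for one resource before wiring it to etcd. The sketch below mirrors that printed shape with local stand-in types (illustrative only, not the upstream k8s.io/apiserver definitions); the two interval fields are nanosecond durations, so 300000000000 is 5m and 60000000000 is 1m:

package main

import (
	"fmt"
	"time"
)

// Stand-ins mirroring the fields storage_factory.go:285 prints;
// not the upstream k8s.io/apiserver/pkg/storage/storagebackend structs.
type TransportConfig struct {
	ServerList []string // etcd endpoints, e.g. http://127.0.0.1:2379
}

type Config struct {
	Prefix                string // per-test etcd key prefix (the UUID in the log)
	Transport             TransportConfig
	Paging                bool          // paginated LISTs against etcd
	CompactionInterval    time.Duration // 300000000000 ns in the log
	CountMetricPollPeriod time.Duration // 60000000000 ns in the log
}

func main() {
	cfg := Config{
		Prefix:                "e02def31-82e9-4378-822c-ad2f1dc064b6",
		Transport:             TransportConfig{ServerList: []string{"http://127.0.0.1:2379"}},
		Paging:                true,
		CompactionInterval:    5 * time.Minute, // == 300000000000 ns
		CountMetricPollPeriod: time.Minute,     // == 60000000000 ns
	}
	fmt.Printf("%+v\n", cfg)
}
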
I0920 06:49:11.701571  108489 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e02def31-82e9-4378-822c-ad2f1dc064b6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:11.701734  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:49:11.701761  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:49:11.702503  108489 watch_cache.go:405] Replace watchCache (rev: 55085) 
I0920 06:49:11.702506  108489 watch_cache.go:405] Replace watchCache (rev: 55085) 
I0920 06:49:11.702656  108489 watch_cache.go:405] Replace watchCache (rev: 55085) 
I0920 06:49:11.703011  108489 watch_cache.go:405] Replace watchCache (rev: 55085) 
I0920 06:49:11.703096  108489 watch_cache.go:405] Replace watchCache (rev: 55085) 
I0920 06:49:11.703522  108489 watch_cache.go:405] Replace watchCache (rev: 55085) 
I0920 06:49:11.705025  108489 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0920 06:49:11.705103  108489 reflector.go:153] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0920 06:49:11.705231  108489 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e02def31-82e9-4378-822c-ad2f1dc064b6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:11.705359  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:49:11.705637  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:49:11.708672  108489 watch_cache.go:405] Replace watchCache (rev: 55088) 
I0920 06:49:11.708772  108489 watch_cache.go:405] Replace watchCache (rev: 55087) 
I0920 06:49:11.710226  108489 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0920 06:49:11.710248  108489 master.go:461] Enabling API group "scheduling.k8s.io".
I0920 06:49:11.710299  108489 reflector.go:153] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0920 06:49:11.710357  108489 master.go:450] Skipping disabled API group "settings.k8s.io".
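
The recurring client.go:361 "parsed scheme: \"endpoint\"" / endpoint.go:66 ccResolverWrapper pairs are the etcd v3 client dialing the backend: it registers a custom "endpoint" gRPC resolver and pushes the server list ([{http://127.0.0.1:2379 ...}]) into the client connection. A hedged equivalent of that dial, assuming the era-appropriate import path (newer releases moved to go.etcd.io/etcd/client/v3):

package main

import (
	"context"
	"log"
	"time"

	"go.etcd.io/etcd/clientv3"
)

func main() {
	// clientv3.New performs the "parsed scheme: endpoint" /
	// ccResolverWrapper steps seen in the log while opening the
	// gRPC connection to etcd.
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"http://127.0.0.1:2379"},
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()
	// Any RPC confirms the resolver wired the endpoint list correctly;
	// the key here is arbitrary.
	if _, err := cli.Get(ctx, "probe"); err != nil {
		log.Fatal(err)
	}
}
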
I0920 06:49:11.710498  108489 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e02def31-82e9-4378-822c-ad2f1dc064b6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:11.710612  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:49:11.710634  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:49:11.714107  108489 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0920 06:49:11.714328  108489 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e02def31-82e9-4378-822c-ad2f1dc064b6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:11.714393  108489 reflector.go:153] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0920 06:49:11.714510  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:49:11.714534  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:49:11.715216  108489 watch_cache.go:405] Replace watchCache (rev: 55090) 
I0920 06:49:11.715255  108489 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0920 06:49:11.715297  108489 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e02def31-82e9-4378-822c-ad2f1dc064b6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:11.715323  108489 reflector.go:153] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0920 06:49:11.715428  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:49:11.715448  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:49:11.716871  108489 watch_cache.go:405] Replace watchCache (rev: 55092) 
I0920 06:49:11.717485  108489 store.go:1342] Monitoring csinodes.storage.k8s.io count at <storage-prefix>//csinodes
I0920 06:49:11.717535  108489 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e02def31-82e9-4378-822c-ad2f1dc064b6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:11.717741  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:49:11.717772  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:49:11.717875  108489 reflector.go:153] Listing and watching *storage.CSINode from storage/cacher.go:/csinodes
I0920 06:49:11.719065  108489 store.go:1342] Monitoring csidrivers.storage.k8s.io count at <storage-prefix>//csidrivers
I0920 06:49:11.719235  108489 watch_cache.go:405] Replace watchCache (rev: 55093) 
I0920 06:49:11.719313  108489 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e02def31-82e9-4378-822c-ad2f1dc064b6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:11.719826  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:49:11.719874  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:49:11.719457  108489 reflector.go:153] Listing and watching *storage.CSIDriver from storage/cacher.go:/csidrivers
I0920 06:49:11.719967  108489 watch_cache.go:405] Replace watchCache (rev: 55099) 
I0920 06:49:11.721658  108489 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0920 06:49:11.721683  108489 reflector.go:153] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0920 06:49:11.722071  108489 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e02def31-82e9-4378-822c-ad2f1dc064b6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:11.722232  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:49:11.722497  108489 watch_cache.go:405] Replace watchCache (rev: 55102) 
I0920 06:49:11.722730  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:49:11.723919  108489 watch_cache.go:405] Replace watchCache (rev: 55102) 
I0920 06:49:11.723982  108489 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0920 06:49:11.724011  108489 master.go:461] Enabling API group "storage.k8s.io".
I0920 06:49:11.724115  108489 reflector.go:153] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0920 06:49:11.724205  108489 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"e02def31-82e9-4378-822c-ad2f1dc064b6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:11.724388  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:49:11.724419  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:49:11.725821  108489 store.go:1342] Monitoring deployments.apps count at <storage-prefix>//deployments
I0920 06:49:11.725910  108489 reflector.go:153] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0920 06:49:11.726040  108489 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"e02def31-82e9-4378-822c-ad2f1dc064b6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:11.726200  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:49:11.726228  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:49:11.727052  108489 store.go:1342] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0920 06:49:11.727082  108489 watch_cache.go:405] Replace watchCache (rev: 55105) 
I0920 06:49:11.727231  108489 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"e02def31-82e9-4378-822c-ad2f1dc064b6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:11.727272  108489 reflector.go:153] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0920 06:49:11.727384  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:49:11.727404  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:49:11.728595  108489 watch_cache.go:405] Replace watchCache (rev: 55111) 
I0920 06:49:11.729514  108489 store.go:1342] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0920 06:49:11.729755  108489 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"e02def31-82e9-4378-822c-ad2f1dc064b6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:11.730018  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:49:11.730046  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:49:11.730067  108489 reflector.go:153] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0920 06:49:11.730074  108489 watch_cache.go:405] Replace watchCache (rev: 55112) 
I0920 06:49:11.731164  108489 watch_cache.go:405] Replace watchCache (rev: 55113) 
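
The reflector.go:153 "Listing and watching ..." lines record each watch cache seeding itself: an initial LIST fills the cache, the watch_cache.go:405 "Replace watchCache (rev: N)" line marks that fill at etcd revision N, and a WATCH from that revision keeps the cache current. The same list-then-watch loop is exposed by client-go; a sketch against pods (the kubeconfig path and resync period are illustrative assumptions):

package main

import (
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/fields"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// LIST pods once to seed the store, then WATCH from the returned
	// resourceVersion -- the same cycle the cacher logs above.
	lw := cache.NewListWatchFromClient(
		client.CoreV1().RESTClient(), "pods", metav1.NamespaceAll, fields.Everything())
	store := cache.NewStore(cache.MetaNamespaceKeyFunc)
	reflector := cache.NewReflector(lw, &v1.Pod{}, store, 30*time.Second)

	stop := make(chan struct{})
	reflector.Run(stop) // blocks, relisting on watch errors
}
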
I0920 06:49:11.734243  108489 store.go:1342] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0920 06:49:11.734402  108489 reflector.go:153] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0920 06:49:11.734447  108489 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"e02def31-82e9-4378-822c-ad2f1dc064b6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:11.734608  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:49:11.734631  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:49:11.735977  108489 store.go:1342] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0920 06:49:11.736007  108489 master.go:461] Enabling API group "apps".
I0920 06:49:11.736022  108489 reflector.go:153] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0920 06:49:11.736094  108489 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e02def31-82e9-4378-822c-ad2f1dc064b6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:11.736278  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:49:11.736299  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:49:11.739591  108489 watch_cache.go:405] Replace watchCache (rev: 55113) 
I0920 06:49:11.739832  108489 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0920 06:49:11.739920  108489 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e02def31-82e9-4378-822c-ad2f1dc064b6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:11.740161  108489 reflector.go:153] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0920 06:49:11.740184  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:49:11.740306  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:49:11.740897  108489 watch_cache.go:405] Replace watchCache (rev: 55113) 
I0920 06:49:11.741088  108489 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0920 06:49:11.741150  108489 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e02def31-82e9-4378-822c-ad2f1dc064b6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:11.741185  108489 reflector.go:153] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0920 06:49:11.741281  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:49:11.741299  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:49:11.741767  108489 watch_cache.go:405] Replace watchCache (rev: 55115) 
I0920 06:49:11.741956  108489 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0920 06:49:11.741994  108489 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e02def31-82e9-4378-822c-ad2f1dc064b6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:11.742156  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:49:11.742179  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:49:11.742278  108489 reflector.go:153] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0920 06:49:11.742694  108489 watch_cache.go:405] Replace watchCache (rev: 55115) 
I0920 06:49:11.743747  108489 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0920 06:49:11.743775  108489 master.go:461] Enabling API group "admissionregistration.k8s.io".
I0920 06:49:11.743793  108489 reflector.go:153] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0920 06:49:11.743815  108489 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e02def31-82e9-4378-822c-ad2f1dc064b6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:11.744116  108489 client.go:361] parsed scheme: "endpoint"
I0920 06:49:11.744138  108489 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 06:49:11.744696  108489 watch_cache.go:405] Replace watchCache (rev: 55115) 
I0920 06:49:11.745265  108489 store.go:1342] Monitoring events count at <storage-prefix>//events
I0920 06:49:11.745295  108489 master.go:461] Enabling API group "events.k8s.io".
I0920 06:49:11.745556  108489 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e02def31-82e9-4378-822c-ad2f1dc064b6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:11.745753  108489 reflector.go:153] Listing and watching *core.Event from storage/cacher.go:/events
I0920 06:49:11.745856  108489 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e02def31-82e9-4378-822c-ad2f1dc064b6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:11.746158  108489 watch_cache.go:405] Replace watchCache (rev: 55115) 
I0920 06:49:11.746189  108489 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e02def31-82e9-4378-822c-ad2f1dc064b6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:11.746330  108489 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e02def31-82e9-4378-822c-ad2f1dc064b6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:11.746439  108489 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e02def31-82e9-4378-822c-ad2f1dc064b6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:11.746551  108489 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e02def31-82e9-4378-822c-ad2f1dc064b6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:11.747217  108489 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e02def31-82e9-4378-822c-ad2f1dc064b6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:11.747434  108489 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e02def31-82e9-4378-822c-ad2f1dc064b6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:11.747559  108489 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e02def31-82e9-4378-822c-ad2f1dc064b6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:11.747676  108489 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e02def31-82e9-4378-822c-ad2f1dc064b6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
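
The authorization.k8s.io review resources above get storage codecs but, unlike everything else in this log, no client.go/store.go lines follow them: access reviews are request-scoped and never persisted to etcd; the apiserver evaluates them inline and returns the verdict. They back, for example, the standard:

kubectl auth can-i list pods --namespace default

which posts a SelfSubjectAccessReview and prints the decision.
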
I0920 06:49:11.749040  108489 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"e02def31-82e9-4378-822c-ad2f1dc064b6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:11.749252  108489 watch_cache.go:405] Replace watchCache (rev: 55117) 
I0920 06:49:11.749427  108489 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"e02def31-82e9-4378-822c-ad2f1dc064b6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:11.750322  108489 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"e02def31-82e9-4378-822c-ad2f1dc064b6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:11.750629  108489 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"e02def31-82e9-4378-822c-ad2f1dc064b6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:11.751631  108489 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"e02def31-82e9-4378-822c-ad2f1dc064b6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:11.751976  108489 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"e02def31-82e9-4378-822c-ad2f1dc064b6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:11.752900  108489 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"e02def31-82e9-4378-822c-ad2f1dc064b6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:11.753289  108489 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"e02def31-82e9-4378-822c-ad2f1dc064b6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:11.754084  108489 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"e02def31-82e9-4378-822c-ad2f1dc064b6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:11.754315  108489 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"e02def31-82e9-4378-822c-ad2f1dc064b6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0920 06:49:11.754372  108489 genericapiserver.go:404] Skipping API batch/v2alpha1 because it has no resources.
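
genericapiserver.go:404 drops any group/version that registered no resources; batch/v2alpha1 (the alpha home of CronJob in this era) is off by default. Enabling it on a real apiserver uses the standard runtime-config flag, e.g.:

kube-apiserver --runtime-config=batch/v2alpha1=true

The test fixture leaves the defaults in place, so the group is skipped here.
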
I0920 06:49:11.754960  108489 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e02def31-82e9-4378-822c-ad2f1dc064b6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:11.755118  108489 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e02def31-82e9-4378-822c-ad2f1dc064b6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:11.755431  108489 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e02def31-82e9-4378-822c-ad2f1dc064b6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:11.756382  108489 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e02def31-82e9-4378-822c-ad2f1dc064b6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:11.757154  108489 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e02def31-82e9-4378-822c-ad2f1dc064b6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:11.758158  108489 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"e02def31-82e9-4378-822c-ad2f1dc064b6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:11.758461  108489 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"e02def31-82e9-4378-822c-ad2f1dc064b6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:11.759385  108489 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e02def31-82e9-4378-822c-ad2f1dc064b6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:11.760049  108489 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e02def31-82e9-4378-822c-ad2f1dc064b6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:11.760240  108489 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e02def31-82e9-4378-822c-ad2f1dc064b6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:11.761039  108489 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e02def31-82e9-4378-822c-ad2f1dc064b6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0920 06:49:11.761111  108489 genericapiserver.go:404] Skipping API node.k8s.io/v1alpha1 because it has no resources.
I0920 06:49:11.761878  108489 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"e02def31-82e9-4378-822c-ad2f1dc064b6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:11.762121  108489 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"e02def31-82e9-4378-822c-ad2f1dc064b6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:11.762610  108489 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"e02def31-82e9-4378-822c-ad2f1dc064b6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 06:49:11.763211  108489 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e02def31-82e9-4378-822c-ad2f1dc064b6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersio