PR: draveness — feat: update taint nodes by condition to GA
Result: FAILURE
Tests: 8 failed / 2859 succeeded
Started: 2019-09-20 04:14
Elapsed: 32m25s
Revision:
Builder: gke-prow-ssd-pool-1a225945-d0kf
Refs: master:db1f8da0, 82703:a20bb8f6
pod: f2809e6f-db5c-11e9-a2c5-42201fa4e0be
infra-commit: 5a67b1fcf
repo: k8s.io/kubernetes
repo-commit: e1a97aa388e2042a87091da2211c317ba26090d2
repos: k8s.io/kubernetes: master:db1f8da036428636a710a9081a5fc18ba30c6ef0, 82703:a20bb8f6df3eb04cb3c41ea76495f7b7e942a618

Test Failures


k8s.io/kubernetes/test/integration/scheduler TestTaintBasedEvictions 2m20s

go test -v k8s.io/kubernetes/test/integration/scheduler -run TestTaintBasedEvictions$
=== RUN   TestTaintBasedEvictions
I0920 04:43:41.330707  108295 feature_gate.go:216] feature gates: &{map[EvenPodsSpread:false TaintBasedEvictions:true]}
--- FAIL: TestTaintBasedEvictions (140.14s)

				from junit_d965d8661547eb73cabe6d94d5550ec333e4c0fa_20190920-043242.xml



k8s.io/kubernetes/test/integration/scheduler TestTaintBasedEvictions/Taint_based_evictions_for_NodeNotReady_and_0_tolerationseconds 35s

go test -v k8s.io/kubernetes/test/integration/scheduler -run TestTaintBasedEvictions/Taint_based_evictions_for_NodeNotReady_and_0_tolerationseconds$
=== RUN   TestTaintBasedEvictions/Taint_based_evictions_for_NodeNotReady_and_0_tolerationseconds
W0920 04:44:51.420975  108295 services.go:35] No CIDR for service cluster IPs specified. Default value which was 10.0.0.0/24 is deprecated and will be removed in future releases. Please specify it using --service-cluster-ip-range on kube-apiserver.
I0920 04:44:51.421002  108295 services.go:47] Setting service IP to "10.0.0.1" (read-write).
I0920 04:44:51.421014  108295 master.go:303] Node port range unspecified. Defaulting to 30000-32767.
I0920 04:44:51.421025  108295 master.go:259] Using reconciler: 
I0920 04:44:51.422429  108295 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.422613  108295 client.go:361] parsed scheme: "endpoint"
I0920 04:44:51.422707  108295 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:44:51.423266  108295 store.go:1342] Monitoring podtemplates count at <storage-prefix>//podtemplates
I0920 04:44:51.423292  108295 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.423361  108295 reflector.go:153] Listing and watching *core.PodTemplate from storage/cacher.go:/podtemplates
I0920 04:44:51.423525  108295 client.go:361] parsed scheme: "endpoint"
I0920 04:44:51.423540  108295 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:44:51.424314  108295 watch_cache.go:405] Replace watchCache (rev: 59769) 
I0920 04:44:51.424509  108295 store.go:1342] Monitoring events count at <storage-prefix>//events
I0920 04:44:51.424532  108295 reflector.go:153] Listing and watching *core.Event from storage/cacher.go:/events
I0920 04:44:51.424542  108295 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.424665  108295 client.go:361] parsed scheme: "endpoint"
I0920 04:44:51.424696  108295 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:44:51.425110  108295 watch_cache.go:405] Replace watchCache (rev: 59769) 
I0920 04:44:51.425409  108295 store.go:1342] Monitoring limitranges count at <storage-prefix>//limitranges
I0920 04:44:51.425475  108295 reflector.go:153] Listing and watching *core.LimitRange from storage/cacher.go:/limitranges
I0920 04:44:51.425448  108295 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.425577  108295 client.go:361] parsed scheme: "endpoint"
I0920 04:44:51.425595  108295 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:44:51.426210  108295 watch_cache.go:405] Replace watchCache (rev: 59769) 
I0920 04:44:51.426240  108295 store.go:1342] Monitoring resourcequotas count at <storage-prefix>//resourcequotas
I0920 04:44:51.426300  108295 reflector.go:153] Listing and watching *core.ResourceQuota from storage/cacher.go:/resourcequotas
I0920 04:44:51.426485  108295 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.426610  108295 client.go:361] parsed scheme: "endpoint"
I0920 04:44:51.426635  108295 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:44:51.426978  108295 watch_cache.go:405] Replace watchCache (rev: 59769) 
I0920 04:44:51.427169  108295 store.go:1342] Monitoring secrets count at <storage-prefix>//secrets
I0920 04:44:51.427211  108295 reflector.go:153] Listing and watching *core.Secret from storage/cacher.go:/secrets
I0920 04:44:51.427385  108295 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.427556  108295 client.go:361] parsed scheme: "endpoint"
I0920 04:44:51.427584  108295 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:44:51.428203  108295 watch_cache.go:405] Replace watchCache (rev: 59769) 
I0920 04:44:51.428398  108295 store.go:1342] Monitoring persistentvolumes count at <storage-prefix>//persistentvolumes
I0920 04:44:51.428563  108295 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.428651  108295 reflector.go:153] Listing and watching *core.PersistentVolume from storage/cacher.go:/persistentvolumes
I0920 04:44:51.428717  108295 client.go:361] parsed scheme: "endpoint"
I0920 04:44:51.428739  108295 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:44:51.429295  108295 watch_cache.go:405] Replace watchCache (rev: 59769) 
I0920 04:44:51.429607  108295 store.go:1342] Monitoring persistentvolumeclaims count at <storage-prefix>//persistentvolumeclaims
I0920 04:44:51.429705  108295 reflector.go:153] Listing and watching *core.PersistentVolumeClaim from storage/cacher.go:/persistentvolumeclaims
I0920 04:44:51.429791  108295 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.429933  108295 client.go:361] parsed scheme: "endpoint"
I0920 04:44:51.429960  108295 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:44:51.430442  108295 watch_cache.go:405] Replace watchCache (rev: 59769) 
I0920 04:44:51.430492  108295 store.go:1342] Monitoring configmaps count at <storage-prefix>//configmaps
I0920 04:44:51.430545  108295 reflector.go:153] Listing and watching *core.ConfigMap from storage/cacher.go:/configmaps
I0920 04:44:51.430629  108295 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.430892  108295 client.go:361] parsed scheme: "endpoint"
I0920 04:44:51.430919  108295 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:44:51.431428  108295 watch_cache.go:405] Replace watchCache (rev: 59769) 
I0920 04:44:51.431654  108295 store.go:1342] Monitoring namespaces count at <storage-prefix>//namespaces
I0920 04:44:51.431720  108295 reflector.go:153] Listing and watching *core.Namespace from storage/cacher.go:/namespaces
I0920 04:44:51.432307  108295 watch_cache.go:405] Replace watchCache (rev: 59769) 
I0920 04:44:51.432162  108295 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.432680  108295 client.go:361] parsed scheme: "endpoint"
I0920 04:44:51.432704  108295 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:44:51.433255  108295 store.go:1342] Monitoring endpoints count at <storage-prefix>//services/endpoints
I0920 04:44:51.433314  108295 reflector.go:153] Listing and watching *core.Endpoints from storage/cacher.go:/services/endpoints
I0920 04:44:51.433442  108295 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.433582  108295 client.go:361] parsed scheme: "endpoint"
I0920 04:44:51.433612  108295 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:44:51.434028  108295 watch_cache.go:405] Replace watchCache (rev: 59769) 
I0920 04:44:51.434127  108295 store.go:1342] Monitoring nodes count at <storage-prefix>//minions
I0920 04:44:51.434185  108295 reflector.go:153] Listing and watching *core.Node from storage/cacher.go:/minions
I0920 04:44:51.434573  108295 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.434688  108295 client.go:361] parsed scheme: "endpoint"
I0920 04:44:51.434713  108295 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:44:51.435045  108295 watch_cache.go:405] Replace watchCache (rev: 59769) 
I0920 04:44:51.435203  108295 store.go:1342] Monitoring pods count at <storage-prefix>//pods
I0920 04:44:51.435238  108295 reflector.go:153] Listing and watching *core.Pod from storage/cacher.go:/pods
I0920 04:44:51.435395  108295 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.435534  108295 client.go:361] parsed scheme: "endpoint"
I0920 04:44:51.435560  108295 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:44:51.436044  108295 watch_cache.go:405] Replace watchCache (rev: 59769) 
I0920 04:44:51.436075  108295 store.go:1342] Monitoring serviceaccounts count at <storage-prefix>//serviceaccounts
I0920 04:44:51.436137  108295 reflector.go:153] Listing and watching *core.ServiceAccount from storage/cacher.go:/serviceaccounts
I0920 04:44:51.436281  108295 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.436409  108295 client.go:361] parsed scheme: "endpoint"
I0920 04:44:51.436433  108295 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:44:51.436829  108295 watch_cache.go:405] Replace watchCache (rev: 59769) 
I0920 04:44:51.437077  108295 store.go:1342] Monitoring services count at <storage-prefix>//services/specs
I0920 04:44:51.437120  108295 reflector.go:153] Listing and watching *core.Service from storage/cacher.go:/services/specs
I0920 04:44:51.437115  108295 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.437357  108295 client.go:361] parsed scheme: "endpoint"
I0920 04:44:51.437390  108295 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:44:51.437895  108295 client.go:361] parsed scheme: "endpoint"
I0920 04:44:51.437918  108295 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:44:51.438073  108295 watch_cache.go:405] Replace watchCache (rev: 59769) 
I0920 04:44:51.438544  108295 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.438676  108295 client.go:361] parsed scheme: "endpoint"
I0920 04:44:51.438704  108295 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:44:51.439336  108295 store.go:1342] Monitoring replicationcontrollers count at <storage-prefix>//controllers
I0920 04:44:51.439363  108295 rest.go:115] the default service ipfamily for this cluster is: IPv4
I0920 04:44:51.439384  108295 reflector.go:153] Listing and watching *core.ReplicationController from storage/cacher.go:/controllers
I0920 04:44:51.439966  108295 storage_factory.go:285] storing bindings in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.440062  108295 watch_cache.go:405] Replace watchCache (rev: 59769) 
I0920 04:44:51.440312  108295 storage_factory.go:285] storing componentstatuses in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.441068  108295 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.441686  108295 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.442270  108295 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.442909  108295 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.443353  108295 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.443541  108295 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.443759  108295 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.444341  108295 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.444865  108295 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.445110  108295 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.445840  108295 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.446114  108295 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.446603  108295 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.446845  108295 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.447550  108295 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.447845  108295 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.448036  108295 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.448204  108295 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.448391  108295 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.448545  108295 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.448752  108295 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.449503  108295 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.449789  108295 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.450492  108295 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.451149  108295 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.451446  108295 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.451794  108295 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.452532  108295 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.452868  108295 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.453529  108295 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.454184  108295 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.454800  108295 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.455672  108295 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.455987  108295 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.456153  108295 master.go:450] Skipping disabled API group "auditregistration.k8s.io".
I0920 04:44:51.456235  108295 master.go:461] Enabling API group "authentication.k8s.io".
I0920 04:44:51.456295  108295 master.go:461] Enabling API group "authorization.k8s.io".
I0920 04:44:51.456507  108295 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.456726  108295 client.go:361] parsed scheme: "endpoint"
I0920 04:44:51.456806  108295 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:44:51.457696  108295 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0920 04:44:51.457758  108295 reflector.go:153] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0920 04:44:51.457928  108295 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.458049  108295 client.go:361] parsed scheme: "endpoint"
I0920 04:44:51.458072  108295 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:44:51.458685  108295 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0920 04:44:51.458717  108295 reflector.go:153] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0920 04:44:51.458845  108295 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.458953  108295 client.go:361] parsed scheme: "endpoint"
I0920 04:44:51.458965  108295 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:44:51.459025  108295 watch_cache.go:405] Replace watchCache (rev: 59769) 
I0920 04:44:51.459986  108295 watch_cache.go:405] Replace watchCache (rev: 59769) 
I0920 04:44:51.460032  108295 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0920 04:44:51.460049  108295 master.go:461] Enabling API group "autoscaling".
I0920 04:44:51.460094  108295 reflector.go:153] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0920 04:44:51.460303  108295 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.460421  108295 client.go:361] parsed scheme: "endpoint"
I0920 04:44:51.460445  108295 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:44:51.460996  108295 watch_cache.go:405] Replace watchCache (rev: 59769) 
I0920 04:44:51.461053  108295 store.go:1342] Monitoring jobs.batch count at <storage-prefix>//jobs
I0920 04:44:51.461111  108295 reflector.go:153] Listing and watching *batch.Job from storage/cacher.go:/jobs
I0920 04:44:51.461219  108295 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.461379  108295 client.go:361] parsed scheme: "endpoint"
I0920 04:44:51.461406  108295 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:44:51.461997  108295 watch_cache.go:405] Replace watchCache (rev: 59769) 
I0920 04:44:51.462310  108295 store.go:1342] Monitoring cronjobs.batch count at <storage-prefix>//cronjobs
I0920 04:44:51.462331  108295 master.go:461] Enabling API group "batch".
I0920 04:44:51.462481  108295 reflector.go:153] Listing and watching *batch.CronJob from storage/cacher.go:/cronjobs
I0920 04:44:51.462534  108295 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.462653  108295 client.go:361] parsed scheme: "endpoint"
I0920 04:44:51.462672  108295 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:44:51.463155  108295 watch_cache.go:405] Replace watchCache (rev: 59769) 
I0920 04:44:51.463295  108295 store.go:1342] Monitoring certificatesigningrequests.certificates.k8s.io count at <storage-prefix>//certificatesigningrequests
I0920 04:44:51.463316  108295 master.go:461] Enabling API group "certificates.k8s.io".
I0920 04:44:51.463335  108295 reflector.go:153] Listing and watching *certificates.CertificateSigningRequest from storage/cacher.go:/certificatesigningrequests
I0920 04:44:51.463501  108295 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.463602  108295 client.go:361] parsed scheme: "endpoint"
I0920 04:44:51.463621  108295 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:44:51.464413  108295 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0920 04:44:51.464416  108295 watch_cache.go:405] Replace watchCache (rev: 59769) 
I0920 04:44:51.464488  108295 reflector.go:153] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0920 04:44:51.464594  108295 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.464729  108295 client.go:361] parsed scheme: "endpoint"
I0920 04:44:51.464794  108295 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:44:51.465833  108295 watch_cache.go:405] Replace watchCache (rev: 59769) 
I0920 04:44:51.467008  108295 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0920 04:44:51.467063  108295 master.go:461] Enabling API group "coordination.k8s.io".
I0920 04:44:51.467082  108295 reflector.go:153] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0920 04:44:51.467096  108295 master.go:450] Skipping disabled API group "discovery.k8s.io".
I0920 04:44:51.467344  108295 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.467593  108295 client.go:361] parsed scheme: "endpoint"
I0920 04:44:51.467626  108295 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:44:51.468242  108295 watch_cache.go:405] Replace watchCache (rev: 59769) 
I0920 04:44:51.468385  108295 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0920 04:44:51.468405  108295 master.go:461] Enabling API group "extensions".
I0920 04:44:51.468556  108295 reflector.go:153] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0920 04:44:51.468662  108295 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.468814  108295 client.go:361] parsed scheme: "endpoint"
I0920 04:44:51.468853  108295 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:44:51.469574  108295 watch_cache.go:405] Replace watchCache (rev: 59769) 
I0920 04:44:51.469594  108295 store.go:1342] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I0920 04:44:51.469680  108295 reflector.go:153] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I0920 04:44:51.469754  108295 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.469906  108295 client.go:361] parsed scheme: "endpoint"
I0920 04:44:51.469928  108295 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:44:51.470639  108295 watch_cache.go:405] Replace watchCache (rev: 59769) 
I0920 04:44:51.471055  108295 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0920 04:44:51.471086  108295 reflector.go:153] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0920 04:44:51.471223  108295 master.go:461] Enabling API group "networking.k8s.io".
I0920 04:44:51.471351  108295 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.471521  108295 client.go:361] parsed scheme: "endpoint"
I0920 04:44:51.471543  108295 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:44:51.472034  108295 store.go:1342] Monitoring runtimeclasses.node.k8s.io count at <storage-prefix>//runtimeclasses
I0920 04:44:51.472054  108295 master.go:461] Enabling API group "node.k8s.io".
I0920 04:44:51.472071  108295 reflector.go:153] Listing and watching *node.RuntimeClass from storage/cacher.go:/runtimeclasses
I0920 04:44:51.472236  108295 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.472351  108295 client.go:361] parsed scheme: "endpoint"
I0920 04:44:51.472364  108295 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:44:51.472783  108295 watch_cache.go:405] Replace watchCache (rev: 59769) 
I0920 04:44:51.473072  108295 store.go:1342] Monitoring poddisruptionbudgets.policy count at <storage-prefix>//poddisruptionbudgets
I0920 04:44:51.473136  108295 reflector.go:153] Listing and watching *policy.PodDisruptionBudget from storage/cacher.go:/poddisruptionbudgets
I0920 04:44:51.473252  108295 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.473378  108295 client.go:361] parsed scheme: "endpoint"
I0920 04:44:51.473398  108295 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:44:51.473761  108295 watch_cache.go:405] Replace watchCache (rev: 59769) 
I0920 04:44:51.474111  108295 store.go:1342] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicy
I0920 04:44:51.474138  108295 master.go:461] Enabling API group "policy".
I0920 04:44:51.474168  108295 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.474197  108295 watch_cache.go:405] Replace watchCache (rev: 59769) 
I0920 04:44:51.474263  108295 reflector.go:153] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicy
I0920 04:44:51.474321  108295 client.go:361] parsed scheme: "endpoint"
I0920 04:44:51.474337  108295 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:44:51.475409  108295 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0920 04:44:51.475486  108295 reflector.go:153] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0920 04:44:51.475595  108295 watch_cache.go:405] Replace watchCache (rev: 59769) 
I0920 04:44:51.475593  108295 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.475701  108295 client.go:361] parsed scheme: "endpoint"
I0920 04:44:51.475721  108295 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:44:51.476605  108295 watch_cache.go:405] Replace watchCache (rev: 59769) 
I0920 04:44:51.476633  108295 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0920 04:44:51.476659  108295 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.476690  108295 reflector.go:153] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0920 04:44:51.476773  108295 client.go:361] parsed scheme: "endpoint"
I0920 04:44:51.476788  108295 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:44:51.477357  108295 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0920 04:44:51.477429  108295 reflector.go:153] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0920 04:44:51.477545  108295 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.477589  108295 watch_cache.go:405] Replace watchCache (rev: 59769) 
I0920 04:44:51.477657  108295 client.go:361] parsed scheme: "endpoint"
I0920 04:44:51.477677  108295 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:44:51.478494  108295 watch_cache.go:405] Replace watchCache (rev: 59769) 
I0920 04:44:51.478914  108295 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0920 04:44:51.478972  108295 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.479006  108295 reflector.go:153] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0920 04:44:51.479103  108295 client.go:361] parsed scheme: "endpoint"
I0920 04:44:51.479123  108295 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:44:51.479476  108295 watch_cache.go:405] Replace watchCache (rev: 59769) 
I0920 04:44:51.479709  108295 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0920 04:44:51.479728  108295 reflector.go:153] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0920 04:44:51.479909  108295 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.480049  108295 client.go:361] parsed scheme: "endpoint"
I0920 04:44:51.480067  108295 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:44:51.480431  108295 watch_cache.go:405] Replace watchCache (rev: 59769) 
I0920 04:44:51.480647  108295 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0920 04:44:51.480744  108295 reflector.go:153] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0920 04:44:51.480679  108295 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.480988  108295 client.go:361] parsed scheme: "endpoint"
I0920 04:44:51.481005  108295 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:44:51.481592  108295 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0920 04:44:51.481721  108295 reflector.go:153] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0920 04:44:51.481803  108295 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.481900  108295 watch_cache.go:405] Replace watchCache (rev: 59769) 
I0920 04:44:51.482099  108295 client.go:361] parsed scheme: "endpoint"
I0920 04:44:51.482128  108295 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:44:51.482610  108295 watch_cache.go:405] Replace watchCache (rev: 59769) 
I0920 04:44:51.482854  108295 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0920 04:44:51.483019  108295 master.go:461] Enabling API group "rbac.authorization.k8s.io".
I0920 04:44:51.483023  108295 reflector.go:153] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0920 04:44:51.483596  108295 watch_cache.go:405] Replace watchCache (rev: 59769) 
I0920 04:44:51.484695  108295 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.484791  108295 client.go:361] parsed scheme: "endpoint"
I0920 04:44:51.484808  108295 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:44:51.485541  108295 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0920 04:44:51.485594  108295 reflector.go:153] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0920 04:44:51.486417  108295 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.486654  108295 client.go:361] parsed scheme: "endpoint"
I0920 04:44:51.486496  108295 watch_cache.go:405] Replace watchCache (rev: 59769) 
I0920 04:44:51.486737  108295 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:44:51.487531  108295 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0920 04:44:51.487587  108295 reflector.go:153] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0920 04:44:51.487629  108295 master.go:461] Enabling API group "scheduling.k8s.io".
I0920 04:44:51.488064  108295 master.go:450] Skipping disabled API group "settings.k8s.io".
I0920 04:44:51.488136  108295 watch_cache.go:405] Replace watchCache (rev: 59769) 
I0920 04:44:51.488404  108295 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.488574  108295 client.go:361] parsed scheme: "endpoint"
I0920 04:44:51.488934  108295 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:44:51.489606  108295 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0920 04:44:51.489647  108295 reflector.go:153] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0920 04:44:51.489778  108295 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.489892  108295 client.go:361] parsed scheme: "endpoint"
I0920 04:44:51.489915  108295 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:44:51.490501  108295 watch_cache.go:405] Replace watchCache (rev: 59769) 
I0920 04:44:51.490521  108295 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0920 04:44:51.490550  108295 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.490597  108295 reflector.go:153] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0920 04:44:51.490642  108295 client.go:361] parsed scheme: "endpoint"
I0920 04:44:51.490660  108295 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:44:51.491394  108295 store.go:1342] Monitoring csinodes.storage.k8s.io count at <storage-prefix>//csinodes
I0920 04:44:51.491419  108295 watch_cache.go:405] Replace watchCache (rev: 59769) 
I0920 04:44:51.491473  108295 reflector.go:153] Listing and watching *storage.CSINode from storage/cacher.go:/csinodes
I0920 04:44:51.491677  108295 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.492280  108295 watch_cache.go:405] Replace watchCache (rev: 59769) 
I0920 04:44:51.492524  108295 client.go:361] parsed scheme: "endpoint"
I0920 04:44:51.492797  108295 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:44:51.493560  108295 store.go:1342] Monitoring csidrivers.storage.k8s.io count at <storage-prefix>//csidrivers
I0920 04:44:51.493601  108295 reflector.go:153] Listing and watching *storage.CSIDriver from storage/cacher.go:/csidrivers
I0920 04:44:51.493732  108295 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.493878  108295 client.go:361] parsed scheme: "endpoint"
I0920 04:44:51.493900  108295 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:44:51.494279  108295 watch_cache.go:405] Replace watchCache (rev: 59769) 
I0920 04:44:51.494502  108295 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0920 04:44:51.494557  108295 reflector.go:153] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0920 04:44:51.494680  108295 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.494780  108295 client.go:361] parsed scheme: "endpoint"
I0920 04:44:51.494797  108295 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:44:51.495180  108295 watch_cache.go:405] Replace watchCache (rev: 59769) 
I0920 04:44:51.495287  108295 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0920 04:44:51.495311  108295 master.go:461] Enabling API group "storage.k8s.io".
I0920 04:44:51.495374  108295 reflector.go:153] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0920 04:44:51.495521  108295 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.495629  108295 client.go:361] parsed scheme: "endpoint"
I0920 04:44:51.495655  108295 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:44:51.496135  108295 watch_cache.go:405] Replace watchCache (rev: 59769) 
I0920 04:44:51.496567  108295 store.go:1342] Monitoring deployments.apps count at <storage-prefix>//deployments
I0920 04:44:51.496793  108295 reflector.go:153] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0920 04:44:51.496864  108295 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.496962  108295 client.go:361] parsed scheme: "endpoint"
I0920 04:44:51.496993  108295 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:44:51.497522  108295 watch_cache.go:405] Replace watchCache (rev: 59769) 
I0920 04:44:51.498227  108295 store.go:1342] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0920 04:44:51.498305  108295 reflector.go:153] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0920 04:44:51.498835  108295 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.499020  108295 watch_cache.go:405] Replace watchCache (rev: 59769) 
I0920 04:44:51.500053  108295 client.go:361] parsed scheme: "endpoint"
I0920 04:44:51.500088  108295 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:44:51.500746  108295 store.go:1342] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0920 04:44:51.500875  108295 reflector.go:153] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0920 04:44:51.500907  108295 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.500999  108295 client.go:361] parsed scheme: "endpoint"
I0920 04:44:51.501020  108295 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:44:51.501490  108295 watch_cache.go:405] Replace watchCache (rev: 59769) 
I0920 04:44:51.501611  108295 store.go:1342] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0920 04:44:51.501665  108295 reflector.go:153] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0920 04:44:51.501724  108295 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.501801  108295 client.go:361] parsed scheme: "endpoint"
I0920 04:44:51.501813  108295 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:44:51.502305  108295 watch_cache.go:405] Replace watchCache (rev: 59769) 
I0920 04:44:51.502348  108295 store.go:1342] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0920 04:44:51.502361  108295 master.go:461] Enabling API group "apps".
I0920 04:44:51.502398  108295 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.502487  108295 reflector.go:153] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0920 04:44:51.502541  108295 client.go:361] parsed scheme: "endpoint"
I0920 04:44:51.502686  108295 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:44:51.503175  108295 watch_cache.go:405] Replace watchCache (rev: 59769) 
I0920 04:44:51.503548  108295 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0920 04:44:51.503590  108295 reflector.go:153] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0920 04:44:51.503754  108295 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.503903  108295 client.go:361] parsed scheme: "endpoint"
I0920 04:44:51.503920  108295 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:44:51.504269  108295 watch_cache.go:405] Replace watchCache (rev: 59769) 
I0920 04:44:51.504981  108295 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0920 04:44:51.505038  108295 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.505075  108295 reflector.go:153] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0920 04:44:51.505271  108295 client.go:361] parsed scheme: "endpoint"
I0920 04:44:51.505293  108295 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:44:51.505822  108295 watch_cache.go:405] Replace watchCache (rev: 59769) 
I0920 04:44:51.506018  108295 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0920 04:44:51.506061  108295 reflector.go:153] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0920 04:44:51.506055  108295 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.506161  108295 client.go:361] parsed scheme: "endpoint"
I0920 04:44:51.506183  108295 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:44:51.506912  108295 watch_cache.go:405] Replace watchCache (rev: 59769) 
I0920 04:44:51.506927  108295 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0920 04:44:51.506943  108295 master.go:461] Enabling API group "admissionregistration.k8s.io".
I0920 04:44:51.506965  108295 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.506978  108295 reflector.go:153] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0920 04:44:51.507239  108295 client.go:361] parsed scheme: "endpoint"
I0920 04:44:51.507270  108295 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:44:51.507642  108295 watch_cache.go:405] Replace watchCache (rev: 59769) 
I0920 04:44:51.507860  108295 store.go:1342] Monitoring events count at <storage-prefix>//events
I0920 04:44:51.507881  108295 master.go:461] Enabling API group "events.k8s.io".
I0920 04:44:51.507893  108295 reflector.go:153] Listing and watching *core.Event from storage/cacher.go:/events
I0920 04:44:51.508113  108295 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.508507  108295 watch_cache.go:405] Replace watchCache (rev: 59769) 
I0920 04:44:51.508604  108295 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.508890  108295 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.509004  108295 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.509135  108295 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.509253  108295 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.509519  108295 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.509678  108295 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.509800  108295 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.509938  108295 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.510677  108295 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.510949  108295 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.511682  108295 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.511989  108295 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.512858  108295 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.513146  108295 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.513843  108295 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.514138  108295 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.514768  108295 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.515049  108295 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0920 04:44:51.515125  108295 genericapiserver.go:404] Skipping API batch/v2alpha1 because it has no resources.
I0920 04:44:51.515788  108295 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.515953  108295 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.516193  108295 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.516887  108295 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.517542  108295 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.518196  108295 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.518490  108295 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.519262  108295 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.519884  108295 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.520123  108295 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.520696  108295 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0920 04:44:51.520763  108295 genericapiserver.go:404] Skipping API node.k8s.io/v1alpha1 because it has no resources.
I0920 04:44:51.521384  108295 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.521683  108295 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.522161  108295 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.522812  108295 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.523291  108295 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.523892  108295 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.524422  108295 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.524972  108295 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.525406  108295 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.526060  108295 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.526621  108295 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0920 04:44:51.526689  108295 genericapiserver.go:404] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
I0920 04:44:51.527164  108295 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.527678  108295 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0920 04:44:51.527738  108295 genericapiserver.go:404] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
I0920 04:44:51.528235  108295 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.528790  108295 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.529037  108295 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.529543  108295 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.529960  108295 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.530469  108295 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.530983  108295 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0920 04:44:51.531043  108295 genericapiserver.go:404] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
I0920 04:44:51.531704  108295 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.532251  108295 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.532583  108295 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.533177  108295 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.533448  108295 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.533805  108295 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.534408  108295 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.534678  108295 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.534933  108295 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.535566  108295 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.535856  108295 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.536149  108295 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0920 04:44:51.536206  108295 genericapiserver.go:404] Skipping API apps/v1beta2 because it has no resources.
W0920 04:44:51.536214  108295 genericapiserver.go:404] Skipping API apps/v1beta1 because it has no resources.
I0920 04:44:51.536818  108295 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.537483  108295 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.538067  108295 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.538658  108295 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.539322  108295 storage_factory.go:285] storing events.events.k8s.io in events.k8s.io/v1beta1, reading as events.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"4d70c1cd-a1c5-431b-9f2e-0f7371536e3a", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:51.542314  108295 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0920 04:44:51.542347  108295 healthz.go:177] healthz check poststarthook/bootstrap-controller failed: not finished
I0920 04:44:51.542354  108295 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:44:51.542363  108295 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0920 04:44:51.542371  108295 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0920 04:44:51.542377  108295 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[-]poststarthook/bootstrap-controller failed: reason withheld
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0920 04:44:51.542416  108295 httplog.go:90] GET /healthz: (197.684µs) 0 [Go-http-client/1.1 127.0.0.1:33552]
I0920 04:44:51.543823  108295 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.224662ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33554]
I0920 04:44:51.547794  108295 httplog.go:90] GET /api/v1/services: (1.703316ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33554]
I0920 04:44:51.551491  108295 httplog.go:90] GET /api/v1/services: (841.598µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33554]
I0920 04:44:51.554081  108295 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0920 04:44:51.554177  108295 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:44:51.554191  108295 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0920 04:44:51.554199  108295 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0920 04:44:51.554204  108295 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0920 04:44:51.554235  108295 httplog.go:90] GET /healthz: (264.808µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33552]
I0920 04:44:51.555141  108295 httplog.go:90] GET /api/v1/namespaces/kube-system: (997.995µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33554]
I0920 04:44:51.555162  108295 httplog.go:90] GET /api/v1/services: (649.602µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33556]
I0920 04:44:51.556050  108295 httplog.go:90] GET /api/v1/services: (1.601617ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33558]
I0920 04:44:51.556969  108295 httplog.go:90] POST /api/v1/namespaces: (1.440862ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33554]
I0920 04:44:51.558154  108295 httplog.go:90] GET /api/v1/namespaces/kube-public: (779.434µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33558]
I0920 04:44:51.559973  108295 httplog.go:90] POST /api/v1/namespaces: (1.515457ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33558]
I0920 04:44:51.561102  108295 httplog.go:90] GET /api/v1/namespaces/kube-node-lease: (818.489µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33558]
I0920 04:44:51.562803  108295 httplog.go:90] POST /api/v1/namespaces: (1.241143ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33558]
I0920 04:44:51.777765  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:51.777784  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:51.777814  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:51.777775  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:51.777925  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:51.777930  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:51.777949  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:51.981077  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:51.981077  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:51.981136  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:51.981509  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:51.983280  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:51.983338  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:52.008404  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:52.008496  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:52.008483  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:52.008780  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:52.008818  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:52.009307  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:52.185797  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:52.212940  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:52.420862  108295 client.go:361] parsed scheme: "endpoint"
I0920 04:44:52.420966  108295 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:44:52.444245  108295 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:44:52.444498  108295 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0920 04:44:52.444600  108295 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0920 04:44:52.444707  108295 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0920 04:44:52.444938  108295 httplog.go:90] GET /healthz: (1.824644ms) 0 [Go-http-client/1.1 127.0.0.1:33558]
I0920 04:44:52.455656  108295 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:44:52.455691  108295 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0920 04:44:52.455702  108295 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0920 04:44:52.455711  108295 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0920 04:44:52.455755  108295 httplog.go:90] GET /healthz: (1.045683ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33558]
I0920 04:44:52.543779  108295 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:44:52.543809  108295 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0920 04:44:52.543820  108295 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0920 04:44:52.543829  108295 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0920 04:44:52.543860  108295 httplog.go:90] GET /healthz: (663.319µs) 0 [Go-http-client/1.1 127.0.0.1:33562]
I0920 04:44:52.543919  108295 httplog.go:90] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-node-critical: (1.631225ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33552]
I0920 04:44:52.544001  108295 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.70397ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33558]
I0920 04:44:52.546483  108295 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: (1.998343ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33558]
I0920 04:44:52.546888  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.948393ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33564]
I0920 04:44:52.547386  108295 httplog.go:90] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (2.784595ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33552]
I0920 04:44:52.547701  108295 storage_scheduling.go:139] created PriorityClass system-node-critical with value 2000001000
I0920 04:44:52.548678  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.15154ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33564]
I0920 04:44:52.548855  108295 httplog.go:90] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-cluster-critical: (956.665µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33558]
I0920 04:44:52.550786  108295 httplog.go:90] POST /api/v1/namespaces/kube-system/configmaps: (2.575993ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0920 04:44:52.550813  108295 httplog.go:90] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (1.571507ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33558]
I0920 04:44:52.550964  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (1.799947ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33564]
I0920 04:44:52.551101  108295 storage_scheduling.go:139] created PriorityClass system-cluster-critical with value 2000000000
I0920 04:44:52.551125  108295 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
I0920 04:44:52.552069  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (788.13µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0920 04:44:52.553306  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (763.316µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0920 04:44:52.554584  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (907.52µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0920 04:44:52.555603  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (684.986µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0920 04:44:52.555644  108295 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:44:52.555753  108295 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:44:52.555783  108295 httplog.go:90] GET /healthz: (822.029µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33558]
I0920 04:44:52.557551  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (723.846µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0920 04:44:52.559209  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (695.601µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0920 04:44:52.560132  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-admin: (630.775µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0920 04:44:52.561699  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.203836ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0920 04:44:52.561881  108295 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/cluster-admin
I0920 04:44:52.562750  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:discovery: (705.726µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0920 04:44:52.564138  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.084783ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0920 04:44:52.564416  108295 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:discovery
I0920 04:44:52.565260  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:basic-user: (615.607µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0920 04:44:52.566548  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (985.725µs) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0920 04:44:52.566884  108295 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:basic-user
I0920 04:44:52.567688  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:public-info-viewer: (664.044µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0920 04:44:52.569096  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.086225ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0920 04:44:52.569291  108295 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:public-info-viewer
I0920 04:44:52.570034  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (550.948µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0920 04:44:52.571340  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (980.189µs) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0920 04:44:52.571564  108295 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/admin
I0920 04:44:52.572337  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (581.36µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0920 04:44:52.573698  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.057054ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0920 04:44:52.573958  108295 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/edit
I0920 04:44:52.575019  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (800.938µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0920 04:44:52.576493  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.14722ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0920 04:44:52.576717  108295 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/view
I0920 04:44:52.577539  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (633.98µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0920 04:44:52.578955  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.076603ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0920 04:44:52.579221  108295 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin
I0920 04:44:52.579988  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (580.632µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0920 04:44:52.581984  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.624258ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0920 04:44:52.582348  108295 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit
I0920 04:44:52.583136  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (585.648µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0920 04:44:52.584811  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.326923ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0920 04:44:52.585068  108295 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view
I0920 04:44:52.585974  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:heapster: (699.951µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0920 04:44:52.587542  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.141225ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0920 04:44:52.587737  108295 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:heapster
I0920 04:44:52.588435  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node: (568.237µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0920 04:44:52.590315  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.522837ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0920 04:44:52.590568  108295 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node
I0920 04:44:52.591325  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-problem-detector: (577.336µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0920 04:44:52.593329  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.673489ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0920 04:44:52.593522  108295 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector
I0920 04:44:52.594209  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kubelet-api-admin: (530.04µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0920 04:44:52.595488  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (934.356µs) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0920 04:44:52.595716  108295 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kubelet-api-admin
I0920 04:44:52.596439  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-bootstrapper: (528.944µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0920 04:44:52.597844  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.057431ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0920 04:44:52.598027  108295 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-bootstrapper
I0920 04:44:52.598780  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:auth-delegator: (619.852µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0920 04:44:52.600043  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (964.987µs) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0920 04:44:52.600225  108295 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:auth-delegator
I0920 04:44:52.600978  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-aggregator: (601.725µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0920 04:44:52.602275  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (982.334µs) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0920 04:44:52.602476  108295 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator
I0920 04:44:52.603297  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-controller-manager: (671.754µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0920 04:44:52.605308  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.677037ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0920 04:44:52.605558  108295 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager
I0920 04:44:52.606311  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-dns: (594.482µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0920 04:44:52.607606  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (943.532µs) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0920 04:44:52.607762  108295 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-dns
I0920 04:44:52.608501  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:persistent-volume-provisioner: (588.114µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0920 04:44:52.609800  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (948.441µs) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0920 04:44:52.609943  108295 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner
I0920 04:44:52.610755  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-attacher: (678.759µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0920 04:44:52.612072  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.018298ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0920 04:44:52.612257  108295 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:csi-external-attacher
I0920 04:44:52.613049  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:nodeclient: (621.543µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0920 04:44:52.614434  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.078306ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0920 04:44:52.614657  108295 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:nodeclient
I0920 04:44:52.615337  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient: (540.152µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0920 04:44:52.616868  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.077602ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0920 04:44:52.617054  108295 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
I0920 04:44:52.617848  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:volume-scheduler: (631.428µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0920 04:44:52.619040  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (924.975µs) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0920 04:44:52.619216  108295 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:volume-scheduler
I0920 04:44:52.620100  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-proxier: (714.215µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0920 04:44:52.621675  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.259374ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0920 04:44:52.621953  108295 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-proxier
I0920 04:44:52.623177  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-scheduler: (918.092µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0920 04:44:52.628073  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (4.31358ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0920 04:44:52.628315  108295 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler
I0920 04:44:52.629778  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-provisioner: (1.231668ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0920 04:44:52.632376  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.9312ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0920 04:44:52.632709  108295 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:csi-external-provisioner
I0920 04:44:52.634164  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:attachdetach-controller: (1.048057ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0920 04:44:52.636049  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.390485ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0920 04:44:52.636411  108295 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0920 04:44:52.637389  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:clusterrole-aggregation-controller: (760.189µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0920 04:44:52.639200  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.389434ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0920 04:44:52.639386  108295 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0920 04:44:52.640285  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:cronjob-controller: (738.338µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0920 04:44:52.642064  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.42487ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0920 04:44:52.642251  108295 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0920 04:44:52.643090  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:daemon-set-controller: (687.189µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0920 04:44:52.643853  108295 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:44:52.643877  108295 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:44:52.643906  108295 httplog.go:90] GET /healthz: (846.531µs) 0 [Go-http-client/1.1 127.0.0.1:33558]
I0920 04:44:52.645353  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.943887ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0920 04:44:52.646816  108295 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0920 04:44:52.648964  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:deployment-controller: (1.604551ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0920 04:44:52.654057  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (4.549167ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0920 04:44:52.654577  108295 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller
I0920 04:44:52.655760  108295 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:44:52.655788  108295 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:44:52.655869  108295 httplog.go:90] GET /healthz: (968.656µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0920 04:44:52.656096  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:disruption-controller: (1.186434ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33558]
I0920 04:44:52.658541  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.888498ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33558]
I0920 04:44:52.658912  108295 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller
I0920 04:44:52.659956  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:endpoint-controller: (806.834µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33558]
I0920 04:44:52.661856  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.477616ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33558]
I0920 04:44:52.662179  108295 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0920 04:44:52.663368  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:expand-controller: (989.831µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33558]
I0920 04:44:52.665675  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.691846ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33558]
I0920 04:44:52.666053  108295 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:expand-controller
I0920 04:44:52.667411  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:generic-garbage-collector: (1.15321ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33558]
I0920 04:44:52.669273  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.374927ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33558]
I0920 04:44:52.669535  108295 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0920 04:44:52.670885  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:horizontal-pod-autoscaler: (1.060565ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33558]
I0920 04:44:52.673487  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.87929ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33558]
I0920 04:44:52.673989  108295 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0920 04:44:52.675587  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:job-controller: (1.000701ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33558]
I0920 04:44:52.678082  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.761872ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33558]
I0920 04:44:52.678623  108295 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller
I0920 04:44:52.680405  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:namespace-controller: (1.366431ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33558]
I0920 04:44:52.683023  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.002027ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33558]
I0920 04:44:52.683296  108295 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:namespace-controller
I0920 04:44:52.684589  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:node-controller: (980.06µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33558]
I0920 04:44:52.686984  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.832709ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33558]
I0920 04:44:52.687290  108295 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:node-controller
I0920 04:44:52.688383  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:persistent-volume-binder: (857.693µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33558]
I0920 04:44:52.690777  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.9552ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33558]
I0920 04:44:52.691146  108295 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0920 04:44:52.692318  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pod-garbage-collector: (950.392µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33558]
I0920 04:44:52.694084  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.294622ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33558]
I0920 04:44:52.694381  108295 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0920 04:44:52.695567  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replicaset-controller: (757.645µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33558]
I0920 04:44:52.698102  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.93641ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33558]
I0920 04:44:52.698395  108295 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0920 04:44:52.699600  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replication-controller: (917.61µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33558]
I0920 04:44:52.701407  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.339242ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33558]
I0920 04:44:52.701744  108295 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller
I0920 04:44:52.703490  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:resourcequota-controller: (1.414336ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33558]
I0920 04:44:52.706158  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.226526ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33558]
I0920 04:44:52.706497  108295 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0920 04:44:52.707546  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:route-controller: (848.92µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33558]
I0920 04:44:52.709651  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.545125ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33558]
I0920 04:44:52.709868  108295 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:route-controller
I0920 04:44:52.711234  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-account-controller: (1.187818ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33558]
I0920 04:44:52.713081  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.272234ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33558]
I0920 04:44:52.713371  108295 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-account-controller
I0920 04:44:52.714344  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-controller: (785.879µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33558]
I0920 04:44:52.715835  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.11292ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33558]
I0920 04:44:52.716006  108295 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-controller
I0920 04:44:52.716839  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:statefulset-controller: (685.252µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33558]
I0920 04:44:52.718403  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.2382ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33558]
I0920 04:44:52.718604  108295 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0920 04:44:52.719368  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:ttl-controller: (594.49µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33558]
I0920 04:44:52.723483  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.142408ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33558]
I0920 04:44:52.723675  108295 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-controller
I0920 04:44:52.743410  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:certificate-controller: (975.095µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33558]
I0920 04:44:52.743590  108295 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:44:52.743617  108295 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:44:52.743640  108295 httplog.go:90] GET /healthz: (601.737µs) 0 [Go-http-client/1.1 127.0.0.1:33562]
I0920 04:44:52.755614  108295 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:44:52.755642  108295 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:44:52.755671  108295 httplog.go:90] GET /healthz: (856.071µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0920 04:44:52.764928  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.394173ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0920 04:44:52.765184  108295 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller
I0920 04:44:52.777951  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:52.777986  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:52.778077  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:52.778115  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:52.778149  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:52.778021  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:52.778032  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:52.783941  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pvc-protection-controller: (1.346197ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0920 04:44:52.804303  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.864129ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0920 04:44:52.804558  108295 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0920 04:44:52.824084  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pv-protection-controller: (1.420743ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0920 04:44:52.844088  108295 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:44:52.844125  108295 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:44:52.844166  108295 httplog.go:90] GET /healthz: (1.049791ms) 0 [Go-http-client/1.1 127.0.0.1:33558]
I0920 04:44:52.844377  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.922743ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0920 04:44:52.844621  108295 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0920 04:44:52.855418  108295 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:44:52.855474  108295 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:44:52.855521  108295 httplog.go:90] GET /healthz: (775.424µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0920 04:44:52.863334  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin: (925.235µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0920 04:44:52.885074  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.540309ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0920 04:44:52.885426  108295 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin
I0920 04:44:52.903926  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (1.349262ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0920 04:44:52.925198  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.62011ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0920 04:44:52.925641  108295 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery
I0920 04:44:52.954779  108295 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:44:52.954840  108295 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:44:52.954932  108295 httplog.go:90] GET /healthz: (11.734771ms) 0 [Go-http-client/1.1 127.0.0.1:33558]
I0920 04:44:52.955536  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:basic-user: (13.008744ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0920 04:44:52.956583  108295 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:44:52.956619  108295 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:44:52.956657  108295 httplog.go:90] GET /healthz: (1.140258ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33566]
I0920 04:44:52.965345  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.88868ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0920 04:44:52.965623  108295 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user
I0920 04:44:52.981302  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:52.981351  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:52.981352  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:52.981663  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:52.983478  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:52.983558  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:52.983655  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:public-info-viewer: (1.160758ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0920 04:44:53.008628  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:53.014263  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:53.014279  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:53.014271  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (11.707121ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0920 04:44:53.014293  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:53.014555  108295 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:public-info-viewer
I0920 04:44:53.015005  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:53.015057  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:53.025416  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node-proxier: (1.553308ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0920 04:44:53.044337  108295 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:44:53.044377  108295 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:44:53.044414  108295 httplog.go:90] GET /healthz: (1.295369ms) 0 [Go-http-client/1.1 127.0.0.1:33558]
I0920 04:44:53.045746  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.011972ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0920 04:44:53.046114  108295 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier
I0920 04:44:53.055883  108295 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:44:53.055918  108295 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:44:53.055958  108295 httplog.go:90] GET /healthz: (1.135475ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0920 04:44:53.063782  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-controller-manager: (1.311816ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0920 04:44:53.084992  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.398913ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0920 04:44:53.085362  108295 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager
I0920 04:44:53.103714  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-dns: (1.281518ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0920 04:44:53.125271  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.612849ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0920 04:44:53.125653  108295 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns
I0920 04:44:53.143698  108295 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:44:53.143742  108295 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:44:53.143752  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-scheduler: (1.313227ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0920 04:44:53.143771  108295 httplog.go:90] GET /healthz: (716.345µs) 0 [Go-http-client/1.1 127.0.0.1:33558]
I0920 04:44:53.155548  108295 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:44:53.155573  108295 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:44:53.155615  108295 httplog.go:90] GET /healthz: (949.461µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0920 04:44:53.163957  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.64001ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0920 04:44:53.164144  108295 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler
I0920 04:44:53.183423  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:volume-scheduler: (930.277µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0920 04:44:53.185984  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:53.204565  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.920405ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0920 04:44:53.205112  108295 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:volume-scheduler
I0920 04:44:53.213555  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:53.223994  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node: (1.471696ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0920 04:44:53.243819  108295 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:44:53.244039  108295 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:44:53.244260  108295 httplog.go:90] GET /healthz: (1.228746ms) 0 [Go-http-client/1.1 127.0.0.1:33558]
I0920 04:44:53.244691  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.302076ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0920 04:44:53.244884  108295 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node
I0920 04:44:53.255426  108295 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:44:53.255463  108295 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:44:53.255526  108295 httplog.go:90] GET /healthz: (795.817µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0920 04:44:53.263333  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:attachdetach-controller: (924.257µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0920 04:44:53.284738  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.171197ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0920 04:44:53.284985  108295 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0920 04:44:53.303830  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:clusterrole-aggregation-controller: (1.319624ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0920 04:44:53.324640  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.064426ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0920 04:44:53.325033  108295 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0920 04:44:53.343730  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:cronjob-controller: (1.167645ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0920 04:44:53.343828  108295 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:44:53.344066  108295 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:44:53.344220  108295 httplog.go:90] GET /healthz: (1.180174ms) 0 [Go-http-client/1.1 127.0.0.1:33558]
I0920 04:44:53.355524  108295 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:44:53.355555  108295 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:44:53.355593  108295 httplog.go:90] GET /healthz: (895.788µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33558]
I0920 04:44:53.363983  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.632981ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33558]
I0920 04:44:53.364141  108295 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0920 04:44:53.385309  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:daemon-set-controller: (2.768447ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33558]
I0920 04:44:53.404696  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.119795ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33558]
I0920 04:44:53.405108  108295 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0920 04:44:53.423819  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:deployment-controller: (1.247817ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33558]
I0920 04:44:53.445363  108295 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:44:53.445619  108295 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:44:53.445789  108295 httplog.go:90] GET /healthz: (2.690791ms) 0 [Go-http-client/1.1 127.0.0.1:33562]
I0920 04:44:53.445899  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.200129ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33558]
I0920 04:44:53.446587  108295 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I0920 04:44:53.456099  108295 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:44:53.456134  108295 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:44:53.456184  108295 httplog.go:90] GET /healthz: (1.244721ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33558]
I0920 04:44:53.463760  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:disruption-controller: (1.218919ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33558]
I0920 04:44:53.484467  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.960265ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33558]
I0920 04:44:53.484738  108295 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I0920 04:44:53.503708  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:endpoint-controller: (1.210104ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33558]
I0920 04:44:53.524071  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.714019ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33558]
I0920 04:44:53.524268  108295 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0920 04:44:53.543630  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:expand-controller: (1.198844ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33558]
I0920 04:44:53.543635  108295 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:44:53.543727  108295 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:44:53.543826  108295 httplog.go:90] GET /healthz: (789.68µs) 0 [Go-http-client/1.1 127.0.0.1:33562]
I0920 04:44:53.575301  108295 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:44:53.575325  108295 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:44:53.575367  108295 httplog.go:90] GET /healthz: (1.114505ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0920 04:44:53.575694  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.068217ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33558]
I0920 04:44:53.575908  108295 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
I0920 04:44:53.583503  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:generic-garbage-collector: (1.110945ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33558]
I0920 04:44:53.604068  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.663013ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33558]
I0920 04:44:53.604296  108295 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0920 04:44:53.623320  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:horizontal-pod-autoscaler: (880.99µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33558]
I0920 04:44:53.643570  108295 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:44:53.643595  108295 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:44:53.643619  108295 httplog.go:90] GET /healthz: (637.626µs) 0 [Go-http-client/1.1 127.0.0.1:33562]
I0920 04:44:53.644192  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.521708ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33558]
I0920 04:44:53.644419  108295 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0920 04:44:53.655488  108295 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:44:53.655520  108295 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:44:53.655559  108295 httplog.go:90] GET /healthz: (806.537µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33558]
I0920 04:44:53.663778  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:job-controller: (1.277549ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33558]
I0920 04:44:53.684683  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.086905ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33558]
I0920 04:44:53.684949  108295 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I0920 04:44:53.703427  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:namespace-controller: (984.609µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33558]
I0920 04:44:53.724620  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.173148ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33558]
I0920 04:44:53.724840  108295 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I0920 04:44:53.743397  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:node-controller: (971.923µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33558]
I0920 04:44:53.743759  108295 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:44:53.743783  108295 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:44:53.743827  108295 httplog.go:90] GET /healthz: (764.232µs) 0 [Go-http-client/1.1 127.0.0.1:33562]
I0920 04:44:53.755477  108295 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:44:53.755507  108295 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:44:53.755546  108295 httplog.go:90] GET /healthz: (748.24µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0920 04:44:53.764357  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.801735ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0920 04:44:53.764768  108295 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I0920 04:44:53.778173  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:53.778184  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:53.778309  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:53.778345  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:53.778379  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:53.778388  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:53.778404  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:53.783588  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:persistent-volume-binder: (1.125066ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0920 04:44:53.804419  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.791419ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0920 04:44:53.804701  108295 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0920 04:44:53.824799  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pod-garbage-collector: (2.153515ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0920 04:44:53.845063  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.516674ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0920 04:44:53.845388  108295 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:44:53.845564  108295 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:44:53.846049  108295 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0920 04:44:53.848369  108295 httplog.go:90] GET /healthz: (5.240107ms) 0 [Go-http-client/1.1 127.0.0.1:33558]
I0920 04:44:53.855983  108295 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:44:53.856384  108295 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:44:53.856565  108295 httplog.go:90] GET /healthz: (1.630827ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33558]
I0920 04:44:53.864023  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replicaset-controller: (1.375191ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33558]
I0920 04:44:53.887011  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (4.415276ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33558]
I0920 04:44:53.887277  108295 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0920 04:44:53.904195  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replication-controller: (1.525894ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33558]
I0920 04:44:53.924302  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.775081ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33558]
I0920 04:44:53.924603  108295 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I0920 04:44:53.944064  108295 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:44:53.944095  108295 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:44:53.944116  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:resourcequota-controller: (1.50475ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33558]
I0920 04:44:53.949894  108295 httplog.go:90] GET /healthz: (6.797635ms) 0 [Go-http-client/1.1 127.0.0.1:33562]
I0920 04:44:53.956147  108295 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:44:53.956189  108295 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:44:53.956246  108295 httplog.go:90] GET /healthz: (1.405418ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0920 04:44:53.964381  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.859567ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0920 04:44:53.964639  108295 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0920 04:44:53.981561  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:53.981567  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:53.981611  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:53.981920  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:53.983511  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:route-controller: (1.079959ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0920 04:44:53.983637  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:53.983715  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:54.003810  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.429585ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0920 04:44:54.004063  108295 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I0920 04:44:54.008801  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:54.014446  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:54.014492  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:54.014536  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:54.015190  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:54.015219  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:54.023685  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-account-controller: (1.235982ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0920 04:44:54.044104  108295 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:44:54.044140  108295 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:44:54.044176  108295 httplog.go:90] GET /healthz: (1.143227ms) 0 [Go-http-client/1.1 127.0.0.1:33558]
I0920 04:44:54.044348  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.836277ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0920 04:44:54.044596  108295 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I0920 04:44:54.055677  108295 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:44:54.055709  108295 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:44:54.055766  108295 httplog.go:90] GET /healthz: (1.019861ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0920 04:44:54.063326  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-controller: (973.454µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0920 04:44:54.083960  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.45031ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0920 04:44:54.084195  108295 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I0920 04:44:54.103787  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:statefulset-controller: (1.206964ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0920 04:44:54.124291  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.800508ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0920 04:44:54.124604  108295 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0920 04:44:54.143691  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:ttl-controller: (1.255988ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0920 04:44:54.143723  108295 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:44:54.143747  108295 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:44:54.143813  108295 httplog.go:90] GET /healthz: (787.137µs) 0 [Go-http-client/1.1 127.0.0.1:33558]
I0920 04:44:54.155803  108295 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:44:54.155834  108295 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:44:54.155887  108295 httplog.go:90] GET /healthz: (1.035732ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33558]
I0920 04:44:54.166081  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.01394ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33558]
I0920 04:44:54.166647  108295 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I0920 04:44:54.183823  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:certificate-controller: (1.240542ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33558]
I0920 04:44:54.186165  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:54.204611  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.088193ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33558]
I0920 04:44:54.205000  108295 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I0920 04:44:54.213919  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:54.223942  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pvc-protection-controller: (1.178583ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33558]
I0920 04:44:54.244056  108295 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:44:54.244089  108295 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:44:54.244124  108295 httplog.go:90] GET /healthz: (1.059867ms) 0 [Go-http-client/1.1 127.0.0.1:33562]
I0920 04:44:54.244573  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.160038ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33558]
I0920 04:44:54.245036  108295 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0920 04:44:54.255988  108295 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:44:54.256181  108295 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:44:54.256443  108295 httplog.go:90] GET /healthz: (1.424955ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33558]
I0920 04:44:54.263689  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pv-protection-controller: (1.252631ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33558]
I0920 04:44:54.284327  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.88167ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33558]
I0920 04:44:54.285036  108295 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0920 04:44:54.304195  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles/system:controller:bootstrap-signer: (1.668117ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33558]
I0920 04:44:54.306164  108295 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.437668ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33558]
I0920 04:44:54.324788  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles: (2.25426ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33558]
I0920 04:44:54.326488  108295 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0920 04:44:54.343704  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/extension-apiserver-authentication-reader: (1.107281ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33558]
I0920 04:44:54.347778  108295 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:44:54.347824  108295 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:44:54.347886  108295 httplog.go:90] GET /healthz: (4.56214ms) 0 [Go-http-client/1.1 127.0.0.1:33562]
I0920 04:44:54.350201  108295 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.76501ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33558]
I0920 04:44:54.355816  108295 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:44:54.355863  108295 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:44:54.355933  108295 httplog.go:90] GET /healthz: (1.036985ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33558]
I0920 04:44:54.364695  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.178774ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33558]
I0920 04:44:54.364969  108295 storage_rbac.go:278] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
I0920 04:44:54.383877  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:bootstrap-signer: (1.365878ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33558]
I0920 04:44:54.386377  108295 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.963808ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33558]
I0920 04:44:54.404965  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.248983ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33558]
I0920 04:44:54.405331  108295 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0920 04:44:54.423943  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:cloud-provider: (1.363179ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33558]
I0920 04:44:54.425530  108295 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.10403ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33558]
I0920 04:44:54.444315  108295 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:44:54.444343  108295 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:44:54.444384  108295 httplog.go:90] GET /healthz: (1.339204ms) 0 [Go-http-client/1.1 127.0.0.1:33562]
I0920 04:44:54.444818  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.302772ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33558]
I0920 04:44:54.445058  108295 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0920 04:44:54.455415  108295 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:44:54.455438  108295 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:44:54.455484  108295 httplog.go:90] GET /healthz: (726.375µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33558]
I0920 04:44:54.463278  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:token-cleaner: (945.435µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33558]
I0920 04:44:54.464633  108295 httplog.go:90] GET /api/v1/namespaces/kube-system: (984.62µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33558]
I0920 04:44:54.484096  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.645509ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33558]
I0920 04:44:54.484350  108295 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0920 04:44:54.503290  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-controller-manager: (903.039µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33558]
I0920 04:44:54.504516  108295 httplog.go:90] GET /api/v1/namespaces/kube-system: (846.705µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33558]
I0920 04:44:54.523572  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.204039ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33558]
I0920 04:44:54.523760  108295 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0920 04:44:54.543008  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-scheduler: (676.837µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33558]
I0920 04:44:54.543525  108295 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:44:54.543546  108295 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:44:54.543584  108295 httplog.go:90] GET /healthz: (605.844µs) 0 [Go-http-client/1.1 127.0.0.1:33562]
I0920 04:44:54.544265  108295 httplog.go:90] GET /api/v1/namespaces/kube-system: (877.832µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33558]
I0920 04:44:54.555562  108295 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:44:54.555600  108295 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:44:54.555635  108295 httplog.go:90] GET /healthz: (880.238µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33558]
I0920 04:44:54.564754  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.325163ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33558]
I0920 04:44:54.565300  108295 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0920 04:44:54.583829  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::extension-apiserver-authentication-reader: (1.370996ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33558]
I0920 04:44:54.585427  108295 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.192045ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33558]
I0920 04:44:54.606510  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.926484ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33558]
I0920 04:44:54.607430  108295 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::extension-apiserver-authentication-reader in kube-system
I0920 04:44:54.623802  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-controller-manager: (1.209358ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33558]
I0920 04:44:54.625447  108295 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.073097ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33558]
I0920 04:44:54.644284  108295 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:44:54.644320  108295 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:44:54.644362  108295 httplog.go:90] GET /healthz: (747.808µs) 0 [Go-http-client/1.1 127.0.0.1:33562]
I0920 04:44:54.644603  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.983449ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33558]
I0920 04:44:54.644871  108295 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0920 04:44:54.655769  108295 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:44:54.655803  108295 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:44:54.655844  108295 httplog.go:90] GET /healthz: (1.027718ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33558]
I0920 04:44:54.663859  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-scheduler: (1.315083ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33558]
I0920 04:44:54.665321  108295 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.043884ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33558]
I0920 04:44:54.686627  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (4.01588ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33558]
I0920 04:44:54.687073  108295 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0920 04:44:54.703791  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:bootstrap-signer: (1.276144ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33558]
I0920 04:44:54.705260  108295 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.049163ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33558]
I0920 04:44:54.723836  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.413465ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33558]
I0920 04:44:54.724047  108295 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0920 04:44:54.743948  108295 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:44:54.743990  108295 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:44:54.744018  108295 httplog.go:90] GET /healthz: (966.624µs) 0 [Go-http-client/1.1 127.0.0.1:33562]
I0920 04:44:54.743958  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:cloud-provider: (1.429528ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33558]
I0920 04:44:54.745562  108295 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.02789ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0920 04:44:54.755869  108295 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:44:54.755902  108295 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:44:54.755948  108295 httplog.go:90] GET /healthz: (1.05655ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0920 04:44:54.764011  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.561397ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0920 04:44:54.764275  108295 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0920 04:44:54.778364  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:54.778367  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:54.778424  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:54.778495  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:54.778537  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:54.778554  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:54.778640  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:54.783308  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:token-cleaner: (885.444µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0920 04:44:54.784659  108295 httplog.go:90] GET /api/v1/namespaces/kube-system: (969.815µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0920 04:44:54.795538  108295 httplog.go:90] GET /api/v1/namespaces/default: (956.593µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39028]
I0920 04:44:54.796754  108295 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (922.259µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39028]
I0920 04:44:54.797884  108295 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (713.549µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39028]
I0920 04:44:54.803604  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.209215ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0920 04:44:54.803896  108295 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0920 04:44:54.823265  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings/system:controller:bootstrap-signer: (871.289µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0920 04:44:54.824688  108295 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.020283ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0920 04:44:54.843899  108295 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:44:54.843937  108295 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:44:54.843948  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings: (1.569763ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0920 04:44:54.843975  108295 httplog.go:90] GET /healthz: (1.004936ms) 0 [Go-http-client/1.1 127.0.0.1:33558]
I0920 04:44:54.844258  108295 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0920 04:44:54.855322  108295 httplog.go:90] GET /healthz: (647.376µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0920 04:44:54.856543  108295 httplog.go:90] GET /api/v1/namespaces/default: (865.439µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0920 04:44:54.858061  108295 httplog.go:90] POST /api/v1/namespaces: (1.222938ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0920 04:44:54.859048  108295 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (667.192µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0920 04:44:54.861478  108295 httplog.go:90] POST /api/v1/namespaces/default/services: (2.13197ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0920 04:44:54.862594  108295 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (793.422µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0920 04:44:54.864067  108295 httplog.go:90] POST /api/v1/namespaces/default/endpoints: (1.174896ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0920 04:44:54.944491  108295 httplog.go:90] GET /healthz: (1.255881ms) 200 [Go-http-client/1.1 127.0.0.1:33562]
W0920 04:44:54.945353  108295 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0920 04:44:54.945393  108295 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0920 04:44:54.945424  108295 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0920 04:44:54.945433  108295 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0920 04:44:54.945474  108295 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0920 04:44:54.945555  108295 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0920 04:44:54.945591  108295 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0920 04:44:54.945624  108295 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0920 04:44:54.945661  108295 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0920 04:44:54.945693  108295 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0920 04:44:54.945724  108295 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0920 04:44:54.945791  108295 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0920 04:44:54.945839  108295 factory.go:294] Creating scheduler from algorithm provider 'DefaultProvider'
I0920 04:44:54.945883  108295 factory.go:382] Creating scheduler with fit predicates 'map[CheckNodeUnschedulable:{} CheckVolumeBinding:{} GeneralPredicates:{} MatchInterPodAffinity:{} MaxAzureDiskVolumeCount:{} MaxCSIVolumeCountPred:{} MaxEBSVolumeCount:{} MaxGCEPDVolumeCount:{} NoDiskConflict:{} NoVolumeZoneConflict:{} PodToleratesNodeTaints:{}]' and priority functions 'map[BalancedResourceAllocation:{} ImageLocalityPriority:{} InterPodAffinityPriority:{} LeastRequestedPriority:{} NodeAffinityPriority:{} NodePreferAvoidPodsPriority:{} SelectorSpreadPriority:{} TaintTolerationPriority:{}]'
I0920 04:44:54.946048  108295 shared_informer.go:197] Waiting for caches to sync for scheduler
I0920 04:44:54.946216  108295 reflector.go:118] Starting reflector *v1.Pod (12h0m0s) from k8s.io/kubernetes/test/integration/scheduler/util.go:232
I0920 04:44:54.946240  108295 reflector.go:153] Listing and watching *v1.Pod from k8s.io/kubernetes/test/integration/scheduler/util.go:232
I0920 04:44:54.947096  108295 httplog.go:90] GET /api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0: (608.655µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33562]
I0920 04:44:54.947959  108295 get.go:251] Starting watch for /api/v1/pods, rv=59769 labels= fields=status.phase!=Failed,status.phase!=Succeeded timeout=6m0s
I0920 04:44:54.981834  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:54.981853  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:54.981859  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:54.982119  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:54.983883  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:54.983888  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:55.008979  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:55.014710  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:55.014716  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:55.014716  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:55.015335  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:55.015348  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:55.046275  108295 shared_informer.go:227] caches populated
I0920 04:44:55.046313  108295 shared_informer.go:204] Caches are synced for scheduler 
I0920 04:44:55.046727  108295 reflector.go:118] Starting reflector *v1.PersistentVolume (1s) from k8s.io/client-go/informers/factory.go:134
I0920 04:44:55.046755  108295 reflector.go:153] Listing and watching *v1.PersistentVolume from k8s.io/client-go/informers/factory.go:134
I0920 04:44:55.046751  108295 reflector.go:118] Starting reflector *v1.StorageClass (1s) from k8s.io/client-go/informers/factory.go:134
I0920 04:44:55.046768  108295 reflector.go:118] Starting reflector *v1beta1.PodDisruptionBudget (1s) from k8s.io/client-go/informers/factory.go:134
I0920 04:44:55.046774  108295 reflector.go:153] Listing and watching *v1.StorageClass from k8s.io/client-go/informers/factory.go:134
I0920 04:44:55.046784  108295 reflector.go:153] Listing and watching *v1beta1.PodDisruptionBudget from k8s.io/client-go/informers/factory.go:134
I0920 04:44:55.046828  108295 reflector.go:118] Starting reflector *v1.Node (1s) from k8s.io/client-go/informers/factory.go:134
I0920 04:44:55.046842  108295 reflector.go:118] Starting reflector *v1.ReplicaSet (1s) from k8s.io/client-go/informers/factory.go:134
I0920 04:44:55.046859  108295 reflector.go:153] Listing and watching *v1.ReplicaSet from k8s.io/client-go/informers/factory.go:134
I0920 04:44:55.046865  108295 reflector.go:153] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:134
I0920 04:44:55.046904  108295 reflector.go:118] Starting reflector *v1.Service (1s) from k8s.io/client-go/informers/factory.go:134
I0920 04:44:55.046731  108295 reflector.go:118] Starting reflector *v1.StatefulSet (1s) from k8s.io/client-go/informers/factory.go:134
I0920 04:44:55.046921  108295 reflector.go:153] Listing and watching *v1.Service from k8s.io/client-go/informers/factory.go:134
I0920 04:44:55.046926  108295 reflector.go:153] Listing and watching *v1.StatefulSet from k8s.io/client-go/informers/factory.go:134
I0920 04:44:55.047099  108295 reflector.go:118] Starting reflector *v1.PersistentVolumeClaim (1s) from k8s.io/client-go/informers/factory.go:134
I0920 04:44:55.047129  108295 reflector.go:153] Listing and watching *v1.PersistentVolumeClaim from k8s.io/client-go/informers/factory.go:134
I0920 04:44:55.047692  108295 httplog.go:90] GET /api/v1/persistentvolumes?limit=500&resourceVersion=0: (620.557µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33558]
I0920 04:44:55.047991  108295 httplog.go:90] GET /apis/apps/v1/statefulsets?limit=500&resourceVersion=0: (486.213µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33580]
I0920 04:44:55.048123  108295 httplog.go:90] GET /api/v1/nodes?limit=500&resourceVersion=0: (366.918µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33574]
I0920 04:44:55.048128  108295 httplog.go:90] GET /api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: (365.917µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33588]
I0920 04:44:55.048111  108295 httplog.go:90] GET /apis/apps/v1/replicasets?limit=500&resourceVersion=0: (333.918µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33576]
I0920 04:44:55.048317  108295 get.go:251] Starting watch for /api/v1/persistentvolumes, rv=59769 labels= fields= timeout=8m18s
I0920 04:44:55.048583  108295 httplog.go:90] GET /apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: (391.837µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33582]
I0920 04:44:55.048604  108295 httplog.go:90] GET /api/v1/services?limit=500&resourceVersion=0: (416.98µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33586]
I0920 04:44:55.048791  108295 get.go:251] Starting watch for /apis/apps/v1/statefulsets, rv=59769 labels= fields= timeout=8m9s
I0920 04:44:55.048842  108295 reflector.go:118] Starting reflector *v1.ReplicationController (1s) from k8s.io/client-go/informers/factory.go:134
I0920 04:44:55.048863  108295 reflector.go:153] Listing and watching *v1.ReplicationController from k8s.io/client-go/informers/factory.go:134
I0920 04:44:55.048936  108295 get.go:251] Starting watch for /api/v1/nodes, rv=59769 labels= fields= timeout=9m53s
I0920 04:44:55.049198  108295 get.go:251] Starting watch for /api/v1/persistentvolumeclaims, rv=59769 labels= fields= timeout=8m50s
I0920 04:44:55.049321  108295 get.go:251] Starting watch for /apis/apps/v1/replicasets, rv=59769 labels= fields= timeout=7m45s
I0920 04:44:55.049210  108295 reflector.go:118] Starting reflector *v1beta1.CSINode (1s) from k8s.io/client-go/informers/factory.go:134
I0920 04:44:55.049429  108295 reflector.go:153] Listing and watching *v1beta1.CSINode from k8s.io/client-go/informers/factory.go:134
I0920 04:44:55.049584  108295 httplog.go:90] GET /api/v1/replicationcontrollers?limit=500&resourceVersion=0: (482.401µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33576]
I0920 04:44:55.049621  108295 get.go:251] Starting watch for /apis/storage.k8s.io/v1/storageclasses, rv=59769 labels= fields= timeout=6m44s
I0920 04:44:55.049845  108295 httplog.go:90] GET /apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: (2.031132ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33584]
I0920 04:44:55.050109  108295 get.go:251] Starting watch for /api/v1/replicationcontrollers, rv=59769 labels= fields= timeout=9m34s
I0920 04:44:55.050134  108295 get.go:251] Starting watch for /api/v1/services, rv=59883 labels= fields= timeout=8m13s
I0920 04:44:55.050118  108295 httplog.go:90] GET /apis/storage.k8s.io/v1beta1/csinodes?limit=500&resourceVersion=0: (333.797µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33594]
I0920 04:44:55.050535  108295 get.go:251] Starting watch for /apis/policy/v1beta1/poddisruptionbudgets, rv=59769 labels= fields= timeout=6m30s
I0920 04:44:55.050722  108295 get.go:251] Starting watch for /apis/storage.k8s.io/v1beta1/csinodes, rv=59769 labels= fields= timeout=7m47s
I0920 04:44:55.146635  108295 shared_informer.go:227] caches populated
I0920 04:44:55.146789  108295 shared_informer.go:227] caches populated
I0920 04:44:55.146815  108295 shared_informer.go:227] caches populated
I0920 04:44:55.146835  108295 shared_informer.go:227] caches populated
I0920 04:44:55.146856  108295 shared_informer.go:227] caches populated
I0920 04:44:55.146876  108295 shared_informer.go:227] caches populated
I0920 04:44:55.146900  108295 shared_informer.go:227] caches populated
I0920 04:44:55.146920  108295 shared_informer.go:227] caches populated
I0920 04:44:55.146941  108295 shared_informer.go:227] caches populated
I0920 04:44:55.146971  108295 shared_informer.go:227] caches populated
I0920 04:44:55.146999  108295 shared_informer.go:227] caches populated
I0920 04:44:55.147101  108295 node_lifecycle_controller.go:327] Sending events to api server.
I0920 04:44:55.147230  108295 node_lifecycle_controller.go:359] Controller is using taint based evictions.
W0920 04:44:55.147316  108295 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0920 04:44:55.147412  108295 taint_manager.go:162] Sending events to api server.
I0920 04:44:55.147604  108295 node_lifecycle_controller.go:453] Controller will reconcile labels.
I0920 04:44:55.147725  108295 node_lifecycle_controller.go:465] Controller will taint node by condition.
W0920 04:44:55.147804  108295 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0920 04:44:55.147864  108295 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0920 04:44:55.148018  108295 node_lifecycle_controller.go:488] Starting node controller
I0920 04:44:55.148059  108295 shared_informer.go:197] Waiting for caches to sync for taint
I0920 04:44:55.150365  108295 httplog.go:90] POST /api/v1/namespaces: (1.79678ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33596]
I0920 04:44:55.150622  108295 node_lifecycle_controller.go:327] Sending events to api server.
I0920 04:44:55.150674  108295 node_lifecycle_controller.go:359] Controller is using taint based evictions.
I0920 04:44:55.150719  108295 taint_manager.go:162] Sending events to api server.
I0920 04:44:55.150762  108295 node_lifecycle_controller.go:453] Controller will reconcile labels.
I0920 04:44:55.150799  108295 node_lifecycle_controller.go:465] Controller will taint node by condition.
I0920 04:44:55.150835  108295 node_lifecycle_controller.go:488] Starting node controller
I0920 04:44:55.150858  108295 shared_informer.go:197] Waiting for caches to sync for taint
I0920 04:44:55.151005  108295 reflector.go:118] Starting reflector *v1.Namespace (1s) from k8s.io/client-go/informers/factory.go:134
I0920 04:44:55.151024  108295 reflector.go:153] Listing and watching *v1.Namespace from k8s.io/client-go/informers/factory.go:134
I0920 04:44:55.151676  108295 httplog.go:90] GET /api/v1/namespaces?limit=500&resourceVersion=0: (464.201µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33596]
I0920 04:44:55.152338  108295 get.go:251] Starting watch for /api/v1/namespaces, rv=59885 labels= fields= timeout=9m59s
I0920 04:44:55.186365  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:55.214161  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:55.251028  108295 shared_informer.go:227] caches populated
I0920 04:44:55.251112  108295 shared_informer.go:227] caches populated
I0920 04:44:55.251120  108295 shared_informer.go:227] caches populated
I0920 04:44:55.251429  108295 reflector.go:118] Starting reflector *v1.Pod (1s) from k8s.io/client-go/informers/factory.go:134
I0920 04:44:55.251610  108295 reflector.go:153] Listing and watching *v1.Pod from k8s.io/client-go/informers/factory.go:134
I0920 04:44:55.251438  108295 reflector.go:118] Starting reflector *v1beta1.Lease (1s) from k8s.io/client-go/informers/factory.go:134
I0920 04:44:55.251821  108295 reflector.go:153] Listing and watching *v1beta1.Lease from k8s.io/client-go/informers/factory.go:134
I0920 04:44:55.251441  108295 reflector.go:118] Starting reflector *v1.DaemonSet (1s) from k8s.io/client-go/informers/factory.go:134
I0920 04:44:55.251936  108295 reflector.go:153] Listing and watching *v1.DaemonSet from k8s.io/client-go/informers/factory.go:134
I0920 04:44:55.253398  108295 httplog.go:90] GET /apis/coordination.k8s.io/v1beta1/leases?limit=500&resourceVersion=0: (529.669µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33602]
I0920 04:44:55.253418  108295 httplog.go:90] GET /api/v1/pods?limit=500&resourceVersion=0: (681.001µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33598]
I0920 04:44:55.254019  108295 httplog.go:90] GET /apis/apps/v1/daemonsets?limit=500&resourceVersion=0: (1.051132ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33600]
I0920 04:44:55.255102  108295 get.go:251] Starting watch for /api/v1/pods, rv=59769 labels= fields= timeout=6m19s
I0920 04:44:55.255199  108295 get.go:251] Starting watch for /apis/apps/v1/daemonsets, rv=59769 labels= fields= timeout=8m12s
I0920 04:44:55.256000  108295 get.go:251] Starting watch for /apis/coordination.k8s.io/v1beta1/leases, rv=59769 labels= fields= timeout=6m36s
I0920 04:44:55.318020  108295 node_lifecycle_controller.go:718] Controller observed a Node deletion: node-1
I0920 04:44:55.318059  108295 controller_utils.go:168] Recording Removing Node node-1 from Controller event message for node node-1
I0920 04:44:55.318083  108295 node_lifecycle_controller.go:718] Controller observed a Node deletion: node-2
I0920 04:44:55.318087  108295 controller_utils.go:168] Recording Removing Node node-2 from Controller event message for node node-2
I0920 04:44:55.318097  108295 node_lifecycle_controller.go:718] Controller observed a Node deletion: node-0
I0920 04:44:55.318101  108295 controller_utils.go:168] Recording Removing Node node-0 from Controller event message for node node-0
I0920 04:44:55.318209  108295 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-0", UID:"97006d94-cc3e-42f4-93b3-561186831257", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RemovingNode' Node node-0 event: Removing Node node-0 from Controller
I0920 04:44:55.318249  108295 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-2", UID:"8c5245c8-f094-4159-9e71-db2cb6091131", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RemovingNode' Node node-2 event: Removing Node node-2 from Controller
I0920 04:44:55.318263  108295 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-1", UID:"1da76ef8-1bf3-4227-9766-39794d404f78", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RemovingNode' Node node-1 event: Removing Node node-1 from Controller
I0920 04:44:55.320563  108295 httplog.go:90] POST /api/v1/namespaces/default/events: (1.899581ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57702]
I0920 04:44:55.322552  108295 httplog.go:90] POST /api/v1/namespaces/default/events: (1.566907ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57702]
I0920 04:44:55.324777  108295 httplog.go:90] POST /api/v1/namespaces/default/events: (1.647022ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57702]
I0920 04:44:55.325135  108295 node_lifecycle_controller.go:718] Controller observed a Node deletion: node-1
I0920 04:44:55.325152  108295 controller_utils.go:168] Recording Removing Node node-1 from Controller event message for node node-1
I0920 04:44:55.325175  108295 node_lifecycle_controller.go:718] Controller observed a Node deletion: node-2
I0920 04:44:55.325179  108295 controller_utils.go:168] Recording Removing Node node-2 from Controller event message for node node-2
I0920 04:44:55.325186  108295 node_lifecycle_controller.go:718] Controller observed a Node deletion: node-0
I0920 04:44:55.325190  108295 controller_utils.go:168] Recording Removing Node node-0 from Controller event message for node node-0
I0920 04:44:55.325284  108295 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-0", UID:"97006d94-cc3e-42f4-93b3-561186831257", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RemovingNode' Node node-0 event: Removing Node node-0 from Controller
I0920 04:44:55.325328  108295 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-2", UID:"8c5245c8-f094-4159-9e71-db2cb6091131", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RemovingNode' Node node-2 event: Removing Node node-2 from Controller
I0920 04:44:55.325344  108295 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-1", UID:"1da76ef8-1bf3-4227-9766-39794d404f78", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RemovingNode' Node node-1 event: Removing Node node-1 from Controller
I0920 04:44:55.326817  108295 httplog.go:90] POST /api/v1/namespaces/default/events: (1.366986ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57702]
I0920 04:44:55.328543  108295 httplog.go:90] POST /api/v1/namespaces/default/events: (1.335641ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57702]
I0920 04:44:55.330420  108295 httplog.go:90] POST /api/v1/namespaces/default/events: (1.40328ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57702]
I0920 04:44:55.348278  108295 shared_informer.go:227] caches populated
I0920 04:44:55.348311  108295 shared_informer.go:204] Caches are synced for taint 
I0920 04:44:55.348407  108295 taint_manager.go:186] Starting NoExecuteTaintManager
I0920 04:44:55.351410  108295 shared_informer.go:227] caches populated
I0920 04:44:55.351468  108295 shared_informer.go:227] caches populated
I0920 04:44:55.351477  108295 shared_informer.go:227] caches populated
I0920 04:44:55.351481  108295 shared_informer.go:227] caches populated
I0920 04:44:55.351487  108295 shared_informer.go:227] caches populated
I0920 04:44:55.351494  108295 shared_informer.go:227] caches populated
I0920 04:44:55.351500  108295 shared_informer.go:227] caches populated
I0920 04:44:55.351508  108295 shared_informer.go:227] caches populated
I0920 04:44:55.351512  108295 shared_informer.go:227] caches populated
I0920 04:44:55.351517  108295 shared_informer.go:227] caches populated
I0920 04:44:55.351523  108295 shared_informer.go:227] caches populated
I0920 04:44:55.351531  108295 shared_informer.go:227] caches populated
I0920 04:44:55.351547  108295 shared_informer.go:204] Caches are synced for taint 
I0920 04:44:55.351609  108295 taint_manager.go:186] Starting NoExecuteTaintManager
I0920 04:44:55.355710  108295 httplog.go:90] POST /api/v1/nodes: (2.315698ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33606]
I0920 04:44:55.356290  108295 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-0"}
I0920 04:44:55.356442  108295 taint_manager.go:438] Updating known taints on node node-0: []
I0920 04:44:55.356298  108295 node_tree.go:93] Added node "node-0" in group "region1:\x00:zone1" to NodeTree
I0920 04:44:55.356355  108295 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-0"}
I0920 04:44:55.356766  108295 taint_manager.go:438] Updating known taints on node node-0: []
I0920 04:44:55.357332  108295 httplog.go:90] GET /api/v1/nodes/node-0?resourceVersion=0: (605.129µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33610]
I0920 04:44:55.357534  108295 httplog.go:90] GET /api/v1/nodes/node-0?resourceVersion=0: (824.59µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33608]
I0920 04:44:55.359092  108295 httplog.go:90] POST /api/v1/nodes: (2.757623ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33606]
I0920 04:44:55.359255  108295 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-1"}
I0920 04:44:55.359279  108295 taint_manager.go:438] Updating known taints on node node-1: []
I0920 04:44:55.359315  108295 node_tree.go:93] Added node "node-1" in group "region1:\x00:zone1" to NodeTree
I0920 04:44:55.359695  108295 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-1"}
I0920 04:44:55.359714  108295 taint_manager.go:438] Updating known taints on node node-1: []
I0920 04:44:55.360044  108295 store.go:362] GuaranteedUpdate of /4d70c1cd-a1c5-431b-9f2e-0f7371536e3a/minions/node-0 failed because of a conflict, going to retry
I0920 04:44:55.360340  108295 httplog.go:90] PATCH /api/v1/nodes/node-0: (2.137186ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33608]
I0920 04:44:55.360578  108295 httplog.go:90] GET /api/v1/nodes/node-1?resourceVersion=0: (300.64µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33612]
I0920 04:44:55.360588  108295 controller_utils.go:204] Added [&Taint{Key:node.kubernetes.io/memory-pressure,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 04:44:55.356332054 +0000 UTC m=+356.149373304,} &Taint{Key:node.kubernetes.io/disk-pressure,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 04:44:55.356332291 +0000 UTC m=+356.149373527,} &Taint{Key:node.kubernetes.io/pid-pressure,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 04:44:55.356332755 +0000 UTC m=+356.149373984,}] Taint to Node node-0
I0920 04:44:55.360637  108295 controller_utils.go:216] Made sure that Node node-0 has no [] Taint
I0920 04:44:55.360728  108295 httplog.go:90] GET /api/v1/nodes/node-1?resourceVersion=0: (450.86µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33614]
I0920 04:44:55.361924  108295 httplog.go:90] POST /api/v1/nodes: (1.465241ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33606]
I0920 04:44:55.362594  108295 node_tree.go:93] Added node "node-2" in group "region1:\x00:zone1" to NodeTree
I0920 04:44:55.362665  108295 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-2"}
I0920 04:44:55.362796  108295 taint_manager.go:438] Updating known taints on node node-2: []
I0920 04:44:55.362865  108295 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-2"}
I0920 04:44:55.362972  108295 taint_manager.go:438] Updating known taints on node node-2: []
I0920 04:44:55.363388  108295 httplog.go:90] GET /api/v1/nodes/node-2?resourceVersion=0: (411.66µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33616]
I0920 04:44:55.363526  108295 httplog.go:90] PATCH /api/v1/nodes/node-0: (4.837499ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33610]
I0920 04:44:55.363601  108295 store.go:362] GuaranteedUpdate of /4d70c1cd-a1c5-431b-9f2e-0f7371536e3a/minions/node-1 failed because of a conflict, going to retry
I0920 04:44:55.363740  108295 controller_utils.go:204] Added [&Taint{Key:node.kubernetes.io/memory-pressure,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 04:44:55.356291885 +0000 UTC m=+356.149333144,} &Taint{Key:node.kubernetes.io/disk-pressure,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 04:44:55.356292114 +0000 UTC m=+356.149333353,} &Taint{Key:node.kubernetes.io/pid-pressure,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 04:44:55.356292315 +0000 UTC m=+356.149333542,}] Taint to Node node-0
I0920 04:44:55.363810  108295 controller_utils.go:216] Made sure that Node node-0 has no [] Taint
I0920 04:44:55.363879  108295 httplog.go:90] PATCH /api/v1/nodes/node-1: (2.347876ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33608]
I0920 04:44:55.364219  108295 httplog.go:90] POST /api/v1/namespaces/taint-based-evictionsa3942f40-2d3c-4209-a0b3-f0d7dfbc0129/pods: (1.857932ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33606]
I0920 04:44:55.364428  108295 httplog.go:90] GET /api/v1/nodes/node-2?resourceVersion=0: (374.234µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33610]
I0920 04:44:55.364513  108295 taint_manager.go:398] Noticed pod update: types.NamespacedName{Namespace:"taint-based-evictionsa3942f40-2d3c-4209-a0b3-f0d7dfbc0129", Name:"testpod-2"}
I0920 04:44:55.364524  108295 scheduling_queue.go:830] About to try and schedule pod taint-based-evictionsa3942f40-2d3c-4209-a0b3-f0d7dfbc0129/testpod-2
I0920 04:44:55.364540  108295 scheduler.go:530] Attempting to schedule pod: taint-based-evictionsa3942f40-2d3c-4209-a0b3-f0d7dfbc0129/testpod-2
I0920 04:44:55.364537  108295 taint_manager.go:398] Noticed pod update: types.NamespacedName{Namespace:"taint-based-evictionsa3942f40-2d3c-4209-a0b3-f0d7dfbc0129", Name:"testpod-2"}
I0920 04:44:55.364725  108295 scheduler_binder.go:257] AssumePodVolumes for pod "taint-based-evictionsa3942f40-2d3c-4209-a0b3-f0d7dfbc0129/testpod-2", node "node-2"
I0920 04:44:55.364741  108295 scheduler_binder.go:267] AssumePodVolumes for pod "taint-based-evictionsa3942f40-2d3c-4209-a0b3-f0d7dfbc0129/testpod-2", node "node-2": all PVCs bound and nothing to do
I0920 04:44:55.364791  108295 factory.go:606] Attempting to bind testpod-2 to node-2
I0920 04:44:55.364930  108295 controller_utils.go:204] Added [&Taint{Key:node.kubernetes.io/memory-pressure,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 04:44:55.359244247 +0000 UTC m=+356.152285498,} &Taint{Key:node.kubernetes.io/disk-pressure,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 04:44:55.359244439 +0000 UTC m=+356.152285665,} &Taint{Key:node.kubernetes.io/pid-pressure,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 04:44:55.359244701 +0000 UTC m=+356.152285941,}] Taint to Node node-1
I0920 04:44:55.364970  108295 controller_utils.go:216] Made sure that Node node-1 has no [] Taint
I0920 04:44:55.365008  108295 httplog.go:90] PATCH /api/v1/nodes/node-1: (3.564029ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33612]
I0920 04:44:55.365281  108295 controller_utils.go:204] Added [&Taint{Key:node.kubernetes.io/memory-pressure,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 04:44:55.359327639 +0000 UTC m=+356.152368892,} &Taint{Key:node.kubernetes.io/disk-pressure,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 04:44:55.359327897 +0000 UTC m=+356.152369134,} &Taint{Key:node.kubernetes.io/pid-pressure,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 04:44:55.359328069 +0000 UTC m=+356.152369300,}] Taint to Node node-1
I0920 04:44:55.365328  108295 controller_utils.go:216] Made sure that Node node-1 has no [] Taint
I0920 04:44:55.366300  108295 httplog.go:90] PATCH /api/v1/nodes/node-2: (1.722754ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33618]
I0920 04:44:55.366326  108295 httplog.go:90] POST /api/v1/namespaces/taint-based-evictionsa3942f40-2d3c-4209-a0b3-f0d7dfbc0129/pods/testpod-2/binding: (1.32498ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33610]
I0920 04:44:55.366440  108295 store.go:362] GuaranteedUpdate of /4d70c1cd-a1c5-431b-9f2e-0f7371536e3a/minions/node-2 failed because of a conflict, going to retry
I0920 04:44:55.366500  108295 controller_utils.go:204] Added [&Taint{Key:node.kubernetes.io/memory-pressure,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 04:44:55.362591768 +0000 UTC m=+356.155633021,} &Taint{Key:node.kubernetes.io/disk-pressure,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 04:44:55.36259204 +0000 UTC m=+356.155633269,} &Taint{Key:node.kubernetes.io/pid-pressure,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 04:44:55.362592223 +0000 UTC m=+356.155633461,}] Taint to Node node-2
I0920 04:44:55.366529  108295 controller_utils.go:216] Made sure that Node node-2 has no [] Taint
I0920 04:44:55.366616  108295 scheduler.go:662] pod taint-based-evictionsa3942f40-2d3c-4209-a0b3-f0d7dfbc0129/testpod-2 is bound successfully on node "node-2", 3 nodes evaluated, 1 nodes were found feasible. Bound node resource: "Capacity: CPU<4>|Memory<16Gi>|Pods<110>|StorageEphemeral<0>; Allocatable: CPU<4>|Memory<16Gi>|Pods<110>|StorageEphemeral<0>.".
I0920 04:44:55.366856  108295 taint_manager.go:398] Noticed pod update: types.NamespacedName{Namespace:"taint-based-evictionsa3942f40-2d3c-4209-a0b3-f0d7dfbc0129", Name:"testpod-2"}
I0920 04:44:55.366856  108295 taint_manager.go:398] Noticed pod update: types.NamespacedName{Namespace:"taint-based-evictionsa3942f40-2d3c-4209-a0b3-f0d7dfbc0129", Name:"testpod-2"}
I0920 04:44:55.367141  108295 httplog.go:90] PATCH /api/v1/nodes/node-2: (1.928476ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33616]
I0920 04:44:55.367356  108295 controller_utils.go:204] Added [&Taint{Key:node.kubernetes.io/memory-pressure,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 04:44:55.362611575 +0000 UTC m=+356.155652829,} &Taint{Key:node.kubernetes.io/disk-pressure,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 04:44:55.362611867 +0000 UTC m=+356.155653098,} &Taint{Key:node.kubernetes.io/pid-pressure,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 04:44:55.36261203 +0000 UTC m=+356.155653272,}] Taint to Node node-2
I0920 04:44:55.367424  108295 controller_utils.go:216] Made sure that Node node-2 has no [] Taint
I0920 04:44:55.368107  108295 httplog.go:90] GET /api/v1/namespaces/kube-system: (872.473µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43256]
I0920 04:44:55.368191  108295 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/taint-based-evictionsa3942f40-2d3c-4209-a0b3-f0d7dfbc0129/events: (1.323792ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33618]
I0920 04:44:55.369200  108295 httplog.go:90] GET /api/v1/namespaces/kube-public: (812.6µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43256]
I0920 04:44:55.370393  108295 httplog.go:90] GET /api/v1/namespaces/kube-node-lease: (762.182µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43256]
I0920 04:44:55.466385  108295 httplog.go:90] GET /api/v1/namespaces/taint-based-evictionsa3942f40-2d3c-4209-a0b3-f0d7dfbc0129/pods/testpod-2: (1.484858ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33616]
I0920 04:44:55.467735  108295 httplog.go:90] GET /api/v1/namespaces/taint-based-evictionsa3942f40-2d3c-4209-a0b3-f0d7dfbc0129/pods/testpod-2: (927.975µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33616]
I0920 04:44:55.468971  108295 httplog.go:90] GET /api/v1/nodes/node-2: (890.741µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33616]
I0920 04:44:55.471324  108295 httplog.go:90] PUT /api/v1/nodes/node-2/status: (1.814697ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33616]
I0920 04:44:55.472153  108295 httplog.go:90] GET /api/v1/nodes/node-2?resourceVersion=0: (397.422µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33616]
I0920 04:44:55.473395  108295 httplog.go:90] GET /api/v1/nodes/node-2?resourceVersion=0: (331.139µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33612]
I0920 04:44:55.475580  108295 httplog.go:90] PATCH /api/v1/nodes/node-2: (2.540818ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33616]
I0920 04:44:55.475844  108295 controller_utils.go:204] Added [&Taint{Key:node.kubernetes.io/not-ready,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 04:44:55.471553849 +0000 UTC m=+356.264595095,}] Taint to Node node-2
I0920 04:44:55.476259  108295 store.go:362] GuaranteedUpdate of /4d70c1cd-a1c5-431b-9f2e-0f7371536e3a/minions/node-2 failed because of a conflict, going to retry
I0920 04:44:55.476391  108295 httplog.go:90] GET /api/v1/nodes/node-2?resourceVersion=0: (368.49µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33616]
I0920 04:44:55.477297  108295 httplog.go:90] PATCH /api/v1/nodes/node-2: (2.870092ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33612]
I0920 04:44:55.477561  108295 controller_utils.go:204] Added [&Taint{Key:node.kubernetes.io/not-ready,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 04:44:55.472923844 +0000 UTC m=+356.265965095,}] Taint to Node node-2
I0920 04:44:55.478119  108295 httplog.go:90] GET /api/v1/nodes/node-2?resourceVersion=0: (362.406µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33612]
I0920 04:44:55.478733  108295 httplog.go:90] PATCH /api/v1/nodes/node-2: (1.666135ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33616]
I0920 04:44:55.478941  108295 controller_utils.go:216] Made sure that Node node-2 has no [&Taint{Key:node.kubernetes.io/memory-pressure,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 04:44:55 +0000 UTC,} &Taint{Key:node.kubernetes.io/disk-pressure,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 04:44:55 +0000 UTC,} &Taint{Key:node.kubernetes.io/pid-pressure,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 04:44:55 +0000 UTC,}] Taint
I0920 04:44:55.479005  108295 controller_utils.go:204] Added [] Taint to Node node-2
I0920 04:44:55.479516  108295 httplog.go:90] GET /api/v1/nodes/node-2?resourceVersion=0: (366.725µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33616]
I0920 04:44:55.479742  108295 controller_utils.go:216] Made sure that Node node-2 has no [&Taint{Key:node.kubernetes.io/memory-pressure,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 04:44:55 +0000 UTC,} &Taint{Key:node.kubernetes.io/disk-pressure,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 04:44:55 +0000 UTC,} &Taint{Key:node.kubernetes.io/pid-pressure,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 04:44:55 +0000 UTC,}] Taint
I0920 04:44:55.480390  108295 httplog.go:90] PATCH /api/v1/nodes/node-2: (1.680467ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33612]
I0920 04:44:55.480606  108295 controller_utils.go:216] Made sure that Node node-2 has no [&Taint{Key:node.kubernetes.io/memory-pressure,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 04:44:55 +0000 UTC,} &Taint{Key:node.kubernetes.io/disk-pressure,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 04:44:55 +0000 UTC,} &Taint{Key:node.kubernetes.io/pid-pressure,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 04:44:55 +0000 UTC,}] Taint
I0920 04:44:55.573831  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.778724ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33612]
I0920 04:44:55.673477  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.509636ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33612]
I0920 04:44:55.773729  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.736814ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33612]
I0920 04:44:55.778538  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:55.778560  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:55.778584  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:55.778665  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:55.778684  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:55.778544  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:55.778805  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:55.873658  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.699963ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33612]
I0920 04:44:55.973262  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.290152ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33612]
I0920 04:44:55.982072  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:55.982106  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:55.982124  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:55.982326  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:55.983980  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:55.984140  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:56.009169  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:56.014864  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:56.014893  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:56.014898  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:56.015492  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:56.015521  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:56.048204  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:56.048620  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:56.048633  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:56.049242  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:56.049391  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:56.050605  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:56.076067  108295 httplog.go:90] GET /api/v1/nodes/node-2: (2.02471ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33612]
I0920 04:44:56.173919  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.881121ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33612]
I0920 04:44:56.186567  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:56.214390  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:56.254886  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:56.273653  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.601612ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33612]
I0920 04:44:56.376319  108295 httplog.go:90] GET /api/v1/nodes/node-2: (2.380256ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33612]
I0920 04:44:56.473597  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.580056ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33612]
I0920 04:44:56.573711  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.683176ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33612]
I0920 04:44:56.675583  108295 httplog.go:90] GET /api/v1/nodes/node-2: (3.390997ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33612]
I0920 04:44:56.773681  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.594369ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33612]
I0920 04:44:56.778723  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:56.778738  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:56.778762  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:56.778766  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:56.778782  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:56.778997  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:56.779087  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:56.873598  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.657215ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33612]
I0920 04:44:56.973395  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.373198ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33612]
I0920 04:44:56.982258  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:56.982277  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:56.982258  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:56.982403  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:56.984201  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:56.984313  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:57.009372  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:57.014987  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:57.015027  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:57.015051  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:57.015684  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:57.015693  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:57.048352  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:57.048776  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:57.048791  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:57.049406  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:57.049553  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:57.050754  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:57.073196  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.223074ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33612]
I0920 04:44:57.173422  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.412713ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33612]
I0920 04:44:57.186749  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:57.214640  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:57.255630  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:57.283414  108295 httplog.go:90] GET /api/v1/nodes/node-2: (7.430052ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33612]
I0920 04:44:57.375473  108295 httplog.go:90] GET /api/v1/nodes/node-2: (2.58139ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33612]
I0920 04:44:57.473889  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.844597ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33612]
I0920 04:44:57.574788  108295 httplog.go:90] GET /api/v1/nodes/node-2: (2.555259ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33612]
I0920 04:44:57.674987  108295 httplog.go:90] GET /api/v1/nodes/node-2: (2.824355ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33612]
I0920 04:44:57.775906  108295 httplog.go:90] GET /api/v1/nodes/node-2: (3.768798ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33612]
I0920 04:44:57.778941  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:57.778940  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:57.778962  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:57.778993  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:57.779018  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:57.779262  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:57.779362  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:57.874521  108295 httplog.go:90] GET /api/v1/nodes/node-2: (2.09279ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33612]
I0920 04:44:57.973644  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.609253ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33612]
I0920 04:44:57.982446  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:57.982538  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:57.982512  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:57.983616  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:57.984434  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:57.984506  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:58.009610  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:58.015178  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:58.015178  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:58.015178  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:58.015910  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:58.016341  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:58.048616  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:58.048972  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:58.049132  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:58.049596  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:58.049822  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:58.051721  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:58.073257  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.24952ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33612]
I0920 04:44:58.174526  108295 httplog.go:90] GET /api/v1/nodes/node-2: (2.188019ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33612]
I0920 04:44:58.187029  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:58.214824  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:58.256145  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:58.273786  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.725311ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33612]
I0920 04:44:58.375994  108295 httplog.go:90] GET /api/v1/nodes/node-2: (3.700704ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33612]
I0920 04:44:58.473869  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.840447ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33612]
I0920 04:44:58.574856  108295 httplog.go:90] GET /api/v1/nodes/node-2: (2.4597ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33612]
I0920 04:44:58.654418  108295 httplog.go:90] GET /api/v1/namespaces/default: (2.31965ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43256]
I0920 04:44:58.656991  108295 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.728594ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43256]
I0920 04:44:58.659273  108295 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.545506ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43256]
I0920 04:44:58.674542  108295 httplog.go:90] GET /api/v1/nodes/node-2: (2.413251ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33612]
I0920 04:44:58.775210  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.767149ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33612]
I0920 04:44:58.779347  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:58.779384  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:58.779384  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:58.779384  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:58.779422  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:58.779511  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:58.779677  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:58.873842  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.843798ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33612]
I0920 04:44:58.974029  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.976267ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33612]
I0920 04:44:58.982693  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:58.982723  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:58.982694  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:58.983764  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:58.984641  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:58.984658  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:59.009953  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:59.015350  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:59.015350  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:59.015560  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:59.016131  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:59.016489  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:59.048984  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:59.049313  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:59.049357  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:59.050117  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:59.050124  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:59.052125  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:59.073713  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.711828ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33612]
I0920 04:44:59.174100  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.935489ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33612]
I0920 04:44:59.187211  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:59.215014  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:59.256361  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:59.273787  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.712009ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33612]
I0920 04:44:59.374640  108295 httplog.go:90] GET /api/v1/nodes/node-2: (2.386545ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33612]
I0920 04:44:59.473778  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.756149ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33612]
I0920 04:44:59.573775  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.743166ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33612]
I0920 04:44:59.673728  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.718509ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33612]
I0920 04:44:59.773768  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.644386ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33612]
I0920 04:44:59.779623  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:59.779623  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:59.779637  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:59.779660  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:59.779669  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:59.779790  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:59.779949  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:59.873655  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.691962ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33612]
I0920 04:44:59.877870  108295 httplog.go:90] GET /api/v1/namespaces/default: (954.068µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57702]
I0920 04:44:59.879268  108295 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (889.958µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57702]
I0920 04:44:59.880348  108295 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (718.347µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57702]
I0920 04:44:59.974052  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.935303ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33612]
I0920 04:44:59.982837  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:59.982842  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:59.982893  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:59.983961  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:59.984843  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:59.984855  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:00.010173  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:00.015687  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:00.015797  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:00.015803  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:00.016350  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:00.016665  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:00.049148  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:00.049474  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:00.049477  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:00.050376  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:00.050534  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:00.052300  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:00.073568  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.612041ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33612]
I0920 04:45:00.173859  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.814307ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33612]
I0920 04:45:00.187407  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:00.215217  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:00.256520  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:00.274574  108295 httplog.go:90] GET /api/v1/nodes/node-2: (2.265552ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33612]
I0920 04:45:00.348905  108295 node_lifecycle_controller.go:706] Controller observed a new Node: "node-1"
I0920 04:45:00.349245  108295 controller_utils.go:168] Recording Registered Node node-1 in Controller event message for node node-1
I0920 04:45:00.349658  108295 node_lifecycle_controller.go:1244] Initializing eviction metric for zone: region1:\x00:zone1
I0920 04:45:00.349940  108295 node_lifecycle_controller.go:706] Controller observed a new Node: "node-2"
I0920 04:45:00.350253  108295 controller_utils.go:168] Recording Registered Node node-2 in Controller event message for node node-2
I0920 04:45:00.350308  108295 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-1", UID:"dea41bc7-c172-4481-810f-f2a9e1d9d6c7", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node node-1 event: Registered Node node-1 in Controller
I0920 04:45:00.351738  108295 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-2", UID:"44b716df-65f7-49a4-9e39-1b01b34a900d", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node node-2 event: Registered Node node-2 in Controller
I0920 04:45:00.351520  108295 node_lifecycle_controller.go:706] Controller observed a new Node: "node-0"
I0920 04:45:00.351775  108295 controller_utils.go:168] Recording Registered Node node-0 in Controller event message for node node-0
W0920 04:45:00.351861  108295 node_lifecycle_controller.go:940] Missing timestamp for Node node-1. Assuming now as a timestamp.
W0920 04:45:00.351912  108295 node_lifecycle_controller.go:940] Missing timestamp for Node node-2. Assuming now as a timestamp.
I0920 04:45:00.351939  108295 node_lifecycle_controller.go:770] Node node-2 is NotReady as of 2019-09-20 04:45:00.351919972 +0000 UTC m=+361.144961219. Adding it to the Taint queue.
W0920 04:45:00.351973  108295 node_lifecycle_controller.go:940] Missing timestamp for Node node-0. Assuming now as a timestamp.
I0920 04:45:00.352126  108295 node_lifecycle_controller.go:706] Controller observed a new Node: "node-0"
I0920 04:45:00.352247  108295 controller_utils.go:168] Recording Registered Node node-0 in Controller event message for node node-0
I0920 04:45:00.352366  108295 node_lifecycle_controller.go:1244] Initializing eviction metric for zone: region1:\x00:zone1
I0920 04:45:00.352417  108295 node_lifecycle_controller.go:706] Controller observed a new Node: "node-1"
I0920 04:45:00.352481  108295 controller_utils.go:168] Recording Registered Node node-1 in Controller event message for node node-1
I0920 04:45:00.352527  108295 node_lifecycle_controller.go:706] Controller observed a new Node: "node-2"
I0920 04:45:00.352555  108295 controller_utils.go:168] Recording Registered Node node-2 in Controller event message for node node-2
W0920 04:45:00.352667  108295 node_lifecycle_controller.go:940] Missing timestamp for Node node-0. Assuming now as a timestamp.
W0920 04:45:00.352754  108295 node_lifecycle_controller.go:940] Missing timestamp for Node node-1. Assuming now as a timestamp.
W0920 04:45:00.352806  108295 node_lifecycle_controller.go:940] Missing timestamp for Node node-2. Assuming now as a timestamp.
I0920 04:45:00.352859  108295 node_lifecycle_controller.go:770] Node node-2 is NotReady as of 2019-09-20 04:45:00.352843559 +0000 UTC m=+361.145884802. Adding it to the Taint queue.
I0920 04:45:00.352150  108295 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-0", UID:"afd51821-fdad-4c6a-b4b2-178b674e2761", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node node-0 event: Registered Node node-0 in Controller
I0920 04:45:00.352265  108295 node_lifecycle_controller.go:1144] Controller detected that zone region1:\x00:zone1 is now in state Normal.
I0920 04:45:00.352589  108295 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-0", UID:"afd51821-fdad-4c6a-b4b2-178b674e2761", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node node-0 event: Registered Node node-0 in Controller
I0920 04:45:00.352930  108295 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-2", UID:"44b716df-65f7-49a4-9e39-1b01b34a900d", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node node-2 event: Registered Node node-2 in Controller
I0920 04:45:00.352903  108295 node_lifecycle_controller.go:1144] Controller detected that zone region1:\x00:zone1 is now in state Normal.
I0920 04:45:00.353045  108295 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-1", UID:"dea41bc7-c172-4481-810f-f2a9e1d9d6c7", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node node-1 event: Registered Node node-1 in Controller
I0920 04:45:00.355285  108295 httplog.go:90] POST /api/v1/namespaces/default/events: (4.920059ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33612]
I0920 04:45:00.355663  108295 httplog.go:90] POST /api/v1/namespaces/default/events: (1.568596ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33616]
I0920 04:45:00.357076  108295 httplog.go:90] POST /api/v1/namespaces/default/events: (1.380115ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33612]
I0920 04:45:00.357437  108295 httplog.go:90] POST /api/v1/namespaces/default/events: (1.264458ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33616]
I0920 04:45:00.359078  108295 httplog.go:90] GET /api/v1/nodes/node-2?resourceVersion=0: (363.537µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33620]
I0920 04:45:00.359083  108295 httplog.go:90] POST /api/v1/namespaces/default/events: (1.581004ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33612]
I0920 04:45:00.359631  108295 httplog.go:90] POST /api/v1/namespaces/default/events: (1.928389ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33616]
I0920 04:45:00.360638  108295 httplog.go:90] GET /api/v1/nodes/node-2?resourceVersion=0: (498.999µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33616]
I0920 04:45:00.362344  108295 httplog.go:90] PATCH /api/v1/nodes/node-2: (2.588049ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33612]
I0920 04:45:00.362782  108295 controller_utils.go:204] Added [&Taint{Key:node.kubernetes.io/not-ready,Value:,Effect:NoExecute,TimeAdded:2019-09-20 04:45:00.358308503 +0000 UTC m=+361.151349804,}] Taint to Node node-2
I0920 04:45:00.362811  108295 store.go:362] GuaranteedUpdate of /4d70c1cd-a1c5-431b-9f2e-0f7371536e3a/minions/node-2 failed because of a conflict, going to retry
I0920 04:45:00.362825  108295 controller_utils.go:216] Made sure that Node node-2 has no [&Taint{Key:node.kubernetes.io/unreachable,Value:,Effect:NoExecute,TimeAdded:<nil>,}] Taint
I0920 04:45:00.362945  108295 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-2"}
I0920 04:45:00.362969  108295 taint_manager.go:438] Updating known taints on node node-2: [{node.kubernetes.io/not-ready  NoExecute 2019-09-20 04:45:00 +0000 UTC}]
I0920 04:45:00.363073  108295 timed_workers.go:110] Adding TimedWorkerQueue item taint-based-evictionsa3942f40-2d3c-4209-a0b3-f0d7dfbc0129/testpod-2 at 2019-09-20 04:45:00.363059521 +0000 UTC m=+361.156100782 to be fired at 2019-09-20 04:45:00.363059521 +0000 UTC m=+361.156100782
I0920 04:45:00.363086  108295 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-2"}
I0920 04:45:00.363115  108295 taint_manager.go:105] NoExecuteTaintManager is deleting Pod: taint-based-evictionsa3942f40-2d3c-4209-a0b3-f0d7dfbc0129/testpod-2
I0920 04:45:00.363104  108295 taint_manager.go:438] Updating known taints on node node-2: [{node.kubernetes.io/not-ready  NoExecute 2019-09-20 04:45:00 +0000 UTC}]
I0920 04:45:00.363171  108295 timed_workers.go:110] Adding TimedWorkerQueue item taint-based-evictionsa3942f40-2d3c-4209-a0b3-f0d7dfbc0129/testpod-2 at 2019-09-20 04:45:00.363158487 +0000 UTC m=+361.156199933 to be fired at 2019-09-20 04:45:00.363158487 +0000 UTC m=+361.156199933
I0920 04:45:00.363262  108295 taint_manager.go:105] NoExecuteTaintManager is deleting Pod: taint-based-evictionsa3942f40-2d3c-4209-a0b3-f0d7dfbc0129/testpod-2
I0920 04:45:00.363364  108295 event.go:255] Event(v1.ObjectReference{Kind:"Pod", Namespace:"taint-based-evictionsa3942f40-2d3c-4209-a0b3-f0d7dfbc0129", Name:"testpod-2", UID:"", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TaintManagerEviction' Marking for deletion Pod taint-based-evictionsa3942f40-2d3c-4209-a0b3-f0d7dfbc0129/testpod-2
I0920 04:45:00.363492  108295 httplog.go:90] PATCH /api/v1/nodes/node-2: (2.241652ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33616]
I0920 04:45:00.363564  108295 event.go:255] Event(v1.ObjectReference{Kind:"Pod", Namespace:"taint-based-evictionsa3942f40-2d3c-4209-a0b3-f0d7dfbc0129", Name:"testpod-2", UID:"", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TaintManagerEviction' Marking for deletion Pod taint-based-evictionsa3942f40-2d3c-4209-a0b3-f0d7dfbc0129/testpod-2
I0920 04:45:00.363990  108295 controller_utils.go:204] Added [&Taint{Key:node.kubernetes.io/not-ready,Value:,Effect:NoExecute,TimeAdded:2019-09-20 04:45:00.359952272 +0000 UTC m=+361.152993499,}] Taint to Node node-2
I0920 04:45:00.364029  108295 controller_utils.go:216] Made sure that Node node-2 has no [&Taint{Key:node.kubernetes.io/unreachable,Value:,Effect:NoExecute,TimeAdded:<nil>,}] Taint
I0920 04:45:00.365676  108295 httplog.go:90] POST /api/v1/namespaces/taint-based-evictionsa3942f40-2d3c-4209-a0b3-f0d7dfbc0129/events: (1.91736ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33616]
I0920 04:45:00.365825  108295 httplog.go:90] POST /api/v1/namespaces/taint-based-evictionsa3942f40-2d3c-4209-a0b3-f0d7dfbc0129/events: (1.592309ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33622]
I0920 04:45:00.366864  108295 httplog.go:90] DELETE /api/v1/namespaces/taint-based-evictionsa3942f40-2d3c-4209-a0b3-f0d7dfbc0129/pods/testpod-2: (3.184205ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33612]
I0920 04:45:00.368615  108295 httplog.go:90] DELETE /api/v1/namespaces/taint-based-evictionsa3942f40-2d3c-4209-a0b3-f0d7dfbc0129/pods/testpod-2: (1.542663ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33620]
I0920 04:45:00.373406  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.348565ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33616]
I0920 04:45:00.475866  108295 httplog.go:90] GET /api/v1/nodes/node-2: (3.412974ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33616]
I0920 04:45:00.574431  108295 httplog.go:90] GET /api/v1/nodes/node-2: (2.158399ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33616]
I0920 04:45:00.673592  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.548764ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33616]
I0920 04:45:00.773343  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.355581ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33616]
I0920 04:45:00.779933  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:00.874631  108295 httplog.go:90] GET /api/v1/nodes/node-2: (2.011371ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33616]
I0920 04:45:00.973642  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.565971ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33616]
I0920 04:45:00.983025  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:01.073687  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.673418ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33616]
I0920 04:45:01.176837  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.602436ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33616]
I0920 04:45:01.187799  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:01.273341  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.383504ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33616]
I0920 04:45:01.373430  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.427276ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33616]
I0920 04:45:01.473398  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.432294ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33616]
I0920 04:45:01.573499  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.501013ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33616]
I0920 04:45:01.673350  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.394408ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33616]
I0920 04:45:01.773784  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.775223ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33616]
I0920 04:45:01.780158  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:01.873514  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.51741ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33616]
I0920 04:45:01.973984  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.832377ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33616]
I0920 04:45:01.983148  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:02.073674  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.702223ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33616]
I0920 04:45:02.173312  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.307181ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33616]
I0920 04:45:02.187887  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:02.273432  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.439325ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33616]
I0920 04:45:02.374142  108295 httplog.go:90] GET /api/v1/nodes/node-2: (2.166238ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33616]
I0920 04:45:02.473087  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.169099ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33616]
I0920 04:45:02.573272  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.328861ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33616]
I0920 04:45:02.673277  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.34068ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33616]
I0920 04:45:02.773194  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.234223ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33616]
I0920 04:45:02.780517  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:02.780654  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:02.780756  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:02.780757  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:02.780772  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:02.780799  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:02.780943  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:02.874085  108295 httplog.go:90] GET /api/v1/nodes/node-2: (2.047682ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33616]
I0920 04:45:02.973656  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.638583ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33616]
I0920 04:45:02.983287  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:03.073303  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.384838ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33616]
I0920 04:45:03.173519  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.511449ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33616]
I0920 04:45:03.188098  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:03.273772  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.766481ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33616]
I0920 04:45:03.374026  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.883175ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33616]
I0920 04:45:03.473868  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.834003ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33616]
I0920 04:45:03.573772  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.691984ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33616]
I0920 04:45:03.673446  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.487288ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33616]
I0920 04:45:03.773073  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.153606ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33616]
I0920 04:45:03.780660  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:03.873376  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.44145ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33616]
I0920 04:45:03.973672  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.542513ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33616]
I0920 04:45:03.983444  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:04.073140  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.250365ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33616]
I0920 04:45:04.173893  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.91058ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33616]
I0920 04:45:04.188349  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:04.273188  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.269585ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33616]
I0920 04:45:04.373579  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.587886ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33616]
I0920 04:45:04.473393  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.414726ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33616]
I0920 04:45:04.573277  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.314235ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33616]
I0920 04:45:04.673654  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.655413ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33616]
I0920 04:45:04.773815  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.561628ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33616]
I0920 04:45:04.780939  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:04.796580  108295 httplog.go:90] GET /api/v1/namespaces/default: (1.719094ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39028]
I0920 04:45:04.798080  108295 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.007217ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39028]
I0920 04:45:04.799545  108295 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.022121ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39028]
I0920 04:45:04.857342  108295 httplog.go:90] GET /api/v1/namespaces/default: (1.455476ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33616]
I0920 04:45:04.858879  108295 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.044798ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33616]
I0920 04:45:04.860381  108295 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.039609ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33616]
I0920 04:45:04.875327  108295 httplog.go:90] GET /api/v1/nodes/node-2: (3.184084ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33616]
I0920 04:45:04.974248  108295 httplog.go:90] GET /api/v1/nodes/node-2: (2.06825ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33616]
I0920 04:45:04.983620  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:05.073338  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.364726ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33616]
I0920 04:45:05.173700  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.680404ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33616]
I0920 04:45:05.188553  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:05.273878  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.749787ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33616]
I0920 04:45:05.353148  108295 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 5.001220361s. Last Ready is: &NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:0001-01-01 00:00:00 +0000 UTC,Reason:,Message:,}
I0920 04:45:05.353385  108295 node_lifecycle_controller.go:1012] Condition MemoryPressure of node node-2 was never updated by kubelet
I0920 04:45:05.353474  108295 node_lifecycle_controller.go:1012] Condition DiskPressure of node node-2 was never updated by kubelet
I0920 04:45:05.353534  108295 node_lifecycle_controller.go:1012] Condition PIDPressure of node node-2 was never updated by kubelet
I0920 04:45:05.353220  108295 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 5.000428577s. Last Ready is: &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:0001-01-01 00:00:00 +0000 UTC,Reason:,Message:,}
I0920 04:45:05.353704  108295 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 5.000908575s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:True,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:0001-01-01 00:00:00 +0000 UTC,Reason:,Message:,}
I0920 04:45:05.353740  108295 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 5.000953691s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:True,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:0001-01-01 00:00:00 +0000 UTC,Reason:,Message:,}
I0920 04:45:05.353763  108295 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 5.000978291s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:True,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:0001-01-01 00:00:00 +0000 UTC,Reason:,Message:,}
I0920 04:45:05.356550  108295 httplog.go:90] PUT /api/v1/nodes/node-1/status: (2.321644ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33616]
I0920 04:45:05.356553  108295 httplog.go:90] PUT /api/v1/nodes/node-2/status: (2.324875ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33624]
I0920 04:45:05.357055  108295 controller_utils.go:180] Recording status change NodeNotReady event message for node node-1
I0920 04:45:05.357084  108295 controller_utils.go:124] Update ready status of pods on node [node-1]
I0920 04:45:05.357291  108295 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-1", UID:"dea41bc7-c172-4481-810f-f2a9e1d9d6c7", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeNotReady' Node node-1 status is now: NodeNotReady
I0920 04:45:05.357351  108295 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 5.005203738s. Last Ready is: &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:0001-01-01 00:00:00 +0000 UTC,Reason:,Message:,}
I0920 04:45:05.357391  108295 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 5.0052456s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:True,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:0001-01-01 00:00:00 +0000 UTC,Reason:,Message:,}
I0920 04:45:05.357407  108295 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 5.005263338s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:True,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:0001-01-01 00:00:00 +0000 UTC,Reason:,Message:,}
I0920 04:45:05.357427  108295 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 5.005283501s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:True,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:0001-01-01 00:00:00 +0000 UTC,Reason:,Message:,}
I0920 04:45:05.358638  108295 httplog.go:90] GET /api/v1/nodes/node-2?resourceVersion=0: (339.17µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33634]
I0920 04:45:05.358748  108295 httplog.go:90] GET /api/v1/nodes/node-2?resourceVersion=0: (444.899µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33632]
I0920 04:45:05.358716  108295 httplog.go:90] GET /api/v1/nodes/node-1?resourceVersion=0: (364.457µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33636]
I0920 04:45:05.358729  108295 httplog.go:90] GET /api/v1/nodes/node-1?resourceVersion=0: (406.834µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33638]
I0920 04:45:05.359151  108295 httplog.go:90] POST /api/v1/namespaces/default/events: (1.808382ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33616]
I0920 04:45:05.361266  108295 httplog.go:90] PUT /api/v1/nodes/node-0/status: (2.978221ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33630]
I0920 04:45:05.361641  108295 controller_utils.go:180] Recording status change NodeNotReady event message for node node-0
I0920 04:45:05.361672  108295 controller_utils.go:124] Update ready status of pods on node [node-0]
I0920 04:45:05.362016  108295 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-0", UID:"afd51821-fdad-4c6a-b4b2-178b674e2761", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeNotReady' Node node-0 status is now: NodeNotReady
I0920 04:45:05.363213  108295 httplog.go:90] GET /api/v1/nodes/node-0?resourceVersion=0: (433.213µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I0920 04:45:05.363615  108295 store.go:362] GuaranteedUpdate of /4d70c1cd-a1c5-431b-9f2e-0f7371536e3a/minions/node-2 failed because of a conflict, going to retry
I0920 04:45:05.363730  108295 httplog.go:90] GET /api/v1/nodes/node-0?resourceVersion=0: (376.417µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33646]
I0920 04:45:05.364542  108295 httplog.go:90] PATCH /api/v1/nodes/node-2: (4.387258ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33632]
I0920 04:45:05.364759  108295 httplog.go:90] PATCH /api/v1/nodes/node-2: (3.892469ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33640]
I0920 04:45:05.364882  108295 httplog.go:90] PATCH /api/v1/nodes/node-1: (4.066495ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33634]
I0920 04:45:05.365096  108295 controller_utils.go:204] Added [&Taint{Key:node.kubernetes.io/unreachable,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 04:45:05.35758644 +0000 UTC m=+366.150627695,}] Taint to Node node-2
I0920 04:45:05.365142  108295 controller_utils.go:204] Added [&Taint{Key:node.kubernetes.io/unreachable,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 04:45:05.357532824 +0000 UTC m=+366.150574078,}] Taint to Node node-2
I0920 04:45:05.365450  108295 controller_utils.go:204] Added [&Taint{Key:node.kubernetes.io/unreachable,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 04:45:05.357639467 +0000 UTC m=+366.150680716,}] Taint to Node node-1
I0920 04:45:05.365639  108295 httplog.go:90] GET /api/v1/nodes/node-2?resourceVersion=0: (297.649µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33644]
I0920 04:45:05.365969  108295 httplog.go:90] GET /api/v1/nodes/node-1?resourceVersion=0: (333.83µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33652]
I0920 04:45:05.365990  108295 httplog.go:90] GET /api/v1/nodes/node-2?resourceVersion=0: (395.474µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33634]
I0920 04:45:05.367683  108295 httplog.go:90] POST /api/v1/namespaces/default/events: (3.812746ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33648]
I0920 04:45:05.367932  108295 store.go:362] GuaranteedUpdate of /4d70c1cd-a1c5-431b-9f2e-0f7371536e3a/minions/node-0 failed because of a conflict, going to retry
I0920 04:45:05.367940  108295 httplog.go:90] PATCH /api/v1/nodes/node-0: (2.804749ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33646]
I0920 04:45:05.367969  108295 store.go:362] GuaranteedUpdate of /4d70c1cd-a1c5-431b-9f2e-0f7371536e3a/minions/node-1 failed because of a conflict, going to retry
I0920 04:45:05.368161  108295 controller_utils.go:204] Added [&Taint{Key:node.kubernetes.io/unreachable,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 04:45:05.36179971 +0000 UTC m=+366.154840962,}] Taint to Node node-0
I0920 04:45:05.368888  108295 httplog.go:90] GET /api/v1/nodes/node-0?resourceVersion=0: (577.449µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33646]
I0920 04:45:05.369037  108295 httplog.go:90] GET /api/v1/pods?fieldSelector=spec.nodeName%3Dnode-0: (6.816418ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33642]
I0920 04:45:05.369191  108295 httplog.go:90] PATCH /api/v1/nodes/node-0: (4.879837ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33650]
I0920 04:45:05.369263  108295 httplog.go:90] PATCH /api/v1/nodes/node-1: (6.591874ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33630]
I0920 04:45:05.369530  108295 controller_utils.go:204] Added [&Taint{Key:node.kubernetes.io/unreachable,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 04:45:05.361988803 +0000 UTC m=+366.155030096,}] Taint to Node node-0
I0920 04:45:05.369551  108295 controller_utils.go:204] Added [&Taint{Key:node.kubernetes.io/unreachable,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 04:45:05.357676948 +0000 UTC m=+366.150718200,}] Taint to Node node-1
I0920 04:45:05.369617  108295 httplog.go:90] GET /api/v1/pods?fieldSelector=spec.nodeName%3Dnode-1: (11.777751ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33624]
I0920 04:45:05.369655  108295 httplog.go:90] PATCH /api/v1/nodes/node-1: (2.871619ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33644]
I0920 04:45:05.369716  108295 store.go:362] GuaranteedUpdate of /4d70c1cd-a1c5-431b-9f2e-0f7371536e3a/minions/node-2 failed because of a conflict, going to retry
I0920 04:45:05.369692  108295 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 5.017818774s. Last Ready is: &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:0001-01-01 00:00:00 +0000 UTC,Reason:,Message:,}
I0920 04:45:05.369762  108295 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 5.017891663s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:True,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:0001-01-01 00:00:00 +0000 UTC,Reason:,Message:,}
I0920 04:45:05.369818  108295 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 5.016974723s. Last Ready is: &NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:0001-01-01 00:00:00 +0000 UTC,Reason:,Message:,}
I0920 04:45:05.369865  108295 node_lifecycle_controller.go:1012] Condition MemoryPressure of node node-2 was never updated by kubelet
I0920 04:45:05.369876  108295 node_lifecycle_controller.go:1012] Condition DiskPressure of node node-2 was never updated by kubelet
I0920 04:45:05.369895  108295 node_lifecycle_controller.go:1012] Condition PIDPressure of node node-2 was never updated by kubelet
I0920 04:45:05.369781  108295 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 5.017911307s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:True,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:0001-01-01 00:00:00 +0000 UTC,Reason:,Message:,}
I0920 04:45:05.370088  108295 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 5.018214311s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:True,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:0001-01-01 00:00:00 +0000 UTC,Reason:,Message:,}
I0920 04:45:05.370209  108295 httplog.go:90] GET /api/v1/nodes/node-0?resourceVersion=0: (435.441µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33646]
I0920 04:45:05.370061  108295 httplog.go:90] GET /api/v1/nodes/node-1?resourceVersion=0: (358.593µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33648]
I0920 04:45:05.370356  108295 httplog.go:90] PATCH /api/v1/nodes/node-2: (3.601146ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33634]
I0920 04:45:05.370499  108295 controller_utils.go:216] Made sure that Node node-1 has no [&Taint{Key:node.kubernetes.io/memory-pressure,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 04:44:55 +0000 UTC,} &Taint{Key:node.kubernetes.io/disk-pressure,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 04:44:55 +0000 UTC,} &Taint{Key:node.kubernetes.io/pid-pressure,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 04:44:55 +0000 UTC,}] Taint
I0920 04:45:05.370753  108295 controller_utils.go:216] Made sure that Node node-2 has no [&Taint{Key:node.kubernetes.io/not-ready,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 04:44:55 +0000 UTC,}] Taint
I0920 04:45:05.370846  108295 controller_utils.go:204] Added [] Taint to Node node-2
I0920 04:45:05.370582  108295 httplog.go:90] PATCH /api/v1/nodes/node-2: (3.392163ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33654]
I0920 04:45:05.371215  108295 controller_utils.go:216] Made sure that Node node-2 has no [&Taint{Key:node.kubernetes.io/not-ready,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 04:44:55 +0000 UTC,}] Taint
I0920 04:45:05.371616  108295 httplog.go:90] PUT /api/v1/nodes/node-2/status: (1.4875ms) 409 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33644]
I0920 04:45:05.371695  108295 httplog.go:90] GET /api/v1/nodes/node-2?resourceVersion=0: (567.389µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33634]
I0920 04:45:05.371807  108295 httplog.go:90] PUT /api/v1/nodes/node-1/status: (1.490316ms) 409 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33624]
E0920 04:45:05.371903  108295 node_lifecycle_controller.go:1037] Error updating node node-2: Operation cannot be fulfilled on nodes "node-2": the object has been modified; please apply your changes to the latest version and try again
I0920 04:45:05.372022  108295 controller_utils.go:216] Made sure that Node node-2 has no [&Taint{Key:node.kubernetes.io/not-ready,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 04:44:55 +0000 UTC,}] Taint
E0920 04:45:05.372114  108295 node_lifecycle_controller.go:1037] Error updating node node-1: Operation cannot be fulfilled on nodes "node-1": the object has been modified; please apply your changes to the latest version and try again
I0920 04:45:05.372817  108295 httplog.go:90] PATCH /api/v1/nodes/node-0: (1.727096ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33646]
I0920 04:45:05.373286  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.202979ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33644]
I0920 04:45:05.373350  108295 controller_utils.go:216] Made sure that Node node-1 has no [&Taint{Key:node.kubernetes.io/memory-pressure,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 04:44:55 +0000 UTC,} &Taint{Key:node.kubernetes.io/disk-pressure,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 04:44:55 +0000 UTC,} &Taint{Key:node.kubernetes.io/pid-pressure,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 04:44:55 +0000 UTC,}] Taint
I0920 04:45:05.373403  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.543711ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33648]
I0920 04:45:05.373631  108295 httplog.go:90] GET /api/v1/nodes/node-1: (1.414519ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33654]
I0920 04:45:05.373698  108295 controller_utils.go:216] Made sure that Node node-0 has no [&Taint{Key:node.kubernetes.io/memory-pressure,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 04:44:55 +0000 UTC,} &Taint{Key:node.kubernetes.io/disk-pressure,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 04:44:55 +0000 UTC,} &Taint{Key:node.kubernetes.io/pid-pressure,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 04:44:55 +0000 UTC,}] Taint
I0920 04:45:05.375742  108295 httplog.go:90] PATCH /api/v1/nodes/node-0: (1.66908ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33644]
I0920 04:45:05.375939  108295 controller_utils.go:216] Made sure that Node node-0 has no [&Taint{Key:node.kubernetes.io/memory-pressure,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 04:44:55 +0000 UTC,} &Taint{Key:node.kubernetes.io/disk-pressure,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 04:44:55 +0000 UTC,} &Taint{Key:node.kubernetes.io/pid-pressure,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 04:44:55 +0000 UTC,}] Taint
I0920 04:45:05.393689  108295 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 5.040843177s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:45:05 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:45:05.393727  108295 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 5.040886733s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:44:55 +0000 UTC,LastTransitionTime:2019-09-20 04:45:05 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:45:05.393738  108295 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 5.040898454s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:44:55 +0000 UTC,LastTransitionTime:2019-09-20 04:45:05 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:45:05.393748  108295 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 5.040908142s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:44:55 +0000 UTC,LastTransitionTime:2019-09-20 04:45:05 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:45:05.393993  108295 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 5.042122702s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:45:05 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:45:05.394018  108295 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 5.042149756s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:45:05 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:45:05.394039  108295 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 5.042170393s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:45:05 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:45:05.394053  108295 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 5.042185194s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:45:05 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:45:05.394102  108295 node_lifecycle_controller.go:796] Node node-1 is unresponsive as of 2019-09-20 04:45:05.39408715 +0000 UTC m=+366.187128394. Adding it to the Taint queue.
I0920 04:45:05.394129  108295 node_lifecycle_controller.go:1094] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
I0920 04:45:05.394484  108295 httplog.go:90] GET /api/v1/nodes/node-2?resourceVersion=0: (495.103µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33644]
I0920 04:45:05.394704  108295 httplog.go:90] GET /api/v1/nodes/node-2?resourceVersion=0: (400.724µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33646]
I0920 04:45:05.397435  108295 store.go:362] GuaranteedUpdate of /4d70c1cd-a1c5-431b-9f2e-0f7371536e3a/minions/node-2 failed because of a conflict, going to retry
I0920 04:45:05.397674  108295 httplog.go:90] PATCH /api/v1/nodes/node-2: (2.268916ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33644]
I0920 04:45:05.398090  108295 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-2"}
I0920 04:45:05.398116  108295 taint_manager.go:438] Updating known taints on node node-2: []
I0920 04:45:05.398132  108295 taint_manager.go:459] All taints were removed from the Node node-2. Cancelling all evictions...
I0920 04:45:05.398144  108295 timed_workers.go:129] Cancelling TimedWorkerQueue item taint-based-evictionsa3942f40-2d3c-4209-a0b3-f0d7dfbc0129/testpod-2 at 2019-09-20 04:45:05.398138994 +0000 UTC m=+366.191180243
I0920 04:45:05.398291  108295 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-2"}
I0920 04:45:05.398307  108295 taint_manager.go:438] Updating known taints on node node-2: []
I0920 04:45:05.398319  108295 taint_manager.go:459] All taints were removed from the Node node-2. Cancelling all evictions...
I0920 04:45:05.398327  108295 timed_workers.go:129] Cancelling TimedWorkerQueue item taint-based-evictionsa3942f40-2d3c-4209-a0b3-f0d7dfbc0129/testpod-2 at 2019-09-20 04:45:05.39832536 +0000 UTC m=+366.191366609
I0920 04:45:05.399655  108295 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-2"}
I0920 04:45:05.399681  108295 httplog.go:90] PATCH /api/v1/nodes/node-2: (4.382385ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33646]
I0920 04:45:05.399720  108295 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-2"}
I0920 04:45:05.399726  108295 taint_manager.go:438] Updating known taints on node node-2: [{node.kubernetes.io/not-ready  NoExecute 2019-09-20 04:45:00 +0000 UTC} {node.kubernetes.io/unreachable  NoExecute 2019-09-20 04:45:05 +0000 UTC}]
I0920 04:45:05.399744  108295 timed_workers.go:110] Adding TimedWorkerQueue item taint-based-evictionsa3942f40-2d3c-4209-a0b3-f0d7dfbc0129/testpod-2 at 2019-09-20 04:45:05.399736647 +0000 UTC m=+366.192777892 to be fired at 2019-09-20 04:45:05.399736647 +0000 UTC m=+366.192777892
I0920 04:45:05.399675  108295 taint_manager.go:438] Updating known taints on node node-2: [{node.kubernetes.io/not-ready  NoExecute 2019-09-20 04:45:00 +0000 UTC} {node.kubernetes.io/unreachable  NoExecute 2019-09-20 04:45:05 +0000 UTC}]
I0920 04:45:05.399782  108295 taint_manager.go:105] NoExecuteTaintManager is deleting Pod: taint-based-evictionsa3942f40-2d3c-4209-a0b3-f0d7dfbc0129/testpod-2
I0920 04:45:05.399797  108295 timed_workers.go:110] Adding TimedWorkerQueue item taint-based-evictionsa3942f40-2d3c-4209-a0b3-f0d7dfbc0129/testpod-2 at 2019-09-20 04:45:05.399786603 +0000 UTC m=+366.192827851 to be fired at 2019-09-20 04:45:05.399786603 +0000 UTC m=+366.192827851
I0920 04:45:05.399850  108295 taint_manager.go:105] NoExecuteTaintManager is deleting Pod: taint-based-evictionsa3942f40-2d3c-4209-a0b3-f0d7dfbc0129/testpod-2
I0920 04:45:05.400004  108295 controller_utils.go:204] Added [&Taint{Key:node.kubernetes.io/unreachable,Value:,Effect:NoExecute,TimeAdded:2019-09-20 04:45:05.393785743 +0000 UTC m=+366.186826981,}] Taint to Node node-2
I0920 04:45:05.400174  108295 event.go:255] Event(v1.ObjectReference{Kind:"Pod", Namespace:"taint-based-evictionsa3942f40-2d3c-4209-a0b3-f0d7dfbc0129", Name:"testpod-2", UID:"", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TaintManagerEviction' Marking for deletion Pod taint-based-evictionsa3942f40-2d3c-4209-a0b3-f0d7dfbc0129/testpod-2
I0920 04:45:05.400292  108295 event.go:255] Event(v1.ObjectReference{Kind:"Pod", Namespace:"taint-based-evictionsa3942f40-2d3c-4209-a0b3-f0d7dfbc0129", Name:"testpod-2", UID:"", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TaintManagerEviction' Marking for deletion Pod taint-based-evictionsa3942f40-2d3c-4209-a0b3-f0d7dfbc0129/testpod-2
I0920 04:45:05.400670  108295 httplog.go:90] GET /api/v1/nodes/node-2?resourceVersion=0: (286.075µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33658]
I0920 04:45:05.402149  108295 httplog.go:90] PATCH /api/v1/namespaces/taint-based-evictionsa3942f40-2d3c-4209-a0b3-f0d7dfbc0129/events/testpod-2.15c60bfcbfd5d385: (1.758159ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:05.402169  108295 httplog.go:90] DELETE /api/v1/namespaces/taint-based-evictionsa3942f40-2d3c-4209-a0b3-f0d7dfbc0129/pods/testpod-2: (1.890716ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33644]
I0920 04:45:05.402179  108295 httplog.go:90] DELETE /api/v1/namespaces/taint-based-evictionsa3942f40-2d3c-4209-a0b3-f0d7dfbc0129/pods/testpod-2: (1.887104ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33646]
I0920 04:45:05.402371  108295 httplog.go:90] PATCH /api/v1/namespaces/taint-based-evictionsa3942f40-2d3c-4209-a0b3-f0d7dfbc0129/events/testpod-2.15c60bfcbfd30622: (1.934164ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33660]
I0920 04:45:05.403901  108295 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-2"}
I0920 04:45:05.403924  108295 taint_manager.go:438] Updating known taints on node node-2: [{node.kubernetes.io/unreachable  NoExecute 2019-09-20 04:45:05 +0000 UTC}]
I0920 04:45:05.403904  108295 httplog.go:90] PATCH /api/v1/nodes/node-2: (1.736987ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33658]
I0920 04:45:05.403947  108295 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-2"}
I0920 04:45:05.403966  108295 taint_manager.go:438] Updating known taints on node node-2: [{node.kubernetes.io/unreachable  NoExecute 2019-09-20 04:45:05 +0000 UTC}]
I0920 04:45:05.403989  108295 timed_workers.go:110] Adding TimedWorkerQueue item taint-based-evictionsa3942f40-2d3c-4209-a0b3-f0d7dfbc0129/testpod-2 at 2019-09-20 04:45:05.403983635 +0000 UTC m=+366.197024881 to be fired at 2019-09-20 04:50:05.403983635 +0000 UTC m=+666.197024881
W0920 04:45:05.403998  108295 timed_workers.go:115] Trying to add already existing work for &{NamespacedName:taint-based-evictionsa3942f40-2d3c-4209-a0b3-f0d7dfbc0129/testpod-2}. Skipping.
I0920 04:45:05.403948  108295 timed_workers.go:110] Adding TimedWorkerQueue item taint-based-evictionsa3942f40-2d3c-4209-a0b3-f0d7dfbc0129/testpod-2 at 2019-09-20 04:45:05.403940179 +0000 UTC m=+366.196981427 to be fired at 2019-09-20 04:50:05.403940179 +0000 UTC m=+666.196981427
W0920 04:45:05.404017  108295 timed_workers.go:115] Trying to add already existing work for &{NamespacedName:taint-based-evictionsa3942f40-2d3c-4209-a0b3-f0d7dfbc0129/testpod-2}. Skipping.
I0920 04:45:05.404168  108295 controller_utils.go:216] Made sure that Node node-2 has no [&Taint{Key:node.kubernetes.io/not-ready,Value:,Effect:NoExecute,TimeAdded:<nil>,}] Taint
I0920 04:45:05.404228  108295 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 5.051509592s. Last Ready is: &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:0001-01-01 00:00:00 +0000 UTC,Reason:,Message:,}
I0920 04:45:05.404249  108295 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 5.051531912s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:True,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:0001-01-01 00:00:00 +0000 UTC,Reason:,Message:,}
I0920 04:45:05.404260  108295 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 5.051542971s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:True,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:0001-01-01 00:00:00 +0000 UTC,Reason:,Message:,}
I0920 04:45:05.404268  108295 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 5.051551573s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:True,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:0001-01-01 00:00:00 +0000 UTC,Reason:,Message:,}
I0920 04:45:05.405688  108295 httplog.go:90] PUT /api/v1/nodes/node-0/status: (1.212739ms) 409 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
E0920 04:45:05.405841  108295 node_lifecycle_controller.go:1037] Error updating node node-0: Operation cannot be fulfilled on nodes "node-0": the object has been modified; please apply your changes to the latest version and try again
I0920 04:45:05.406878  108295 httplog.go:90] GET /api/v1/nodes/node-0: (894.728µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:05.427254  108295 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 5.074532438s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:45:05 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:45:05.427288  108295 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 5.074568891s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:45:05 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:45:05.427306  108295 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 5.074587834s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:45:05 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:45:05.427336  108295 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 5.074617324s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:45:05 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:45:05.427398  108295 node_lifecycle_controller.go:796] Node node-0 is unresponsive as of 2019-09-20 04:45:05.427374868 +0000 UTC m=+366.220416125. Adding it to the Taint queue.
I0920 04:45:05.427425  108295 node_lifecycle_controller.go:1094] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
I0920 04:45:05.427902  108295 httplog.go:90] GET /api/v1/nodes/node-2?resourceVersion=0: (291.362µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:05.473988  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.883261ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:05.573815  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.737372ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:05.673423  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.43407ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:05.773426  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.467442ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:05.781112  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:05.781274  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:05.781337  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:05.781422  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:05.781487  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:05.781511  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:05.781534  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:05.873687  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.70342ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:05.974087  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.801098ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:05.983791  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:05.983821  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:05.984041  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:05.985073  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:05.986062  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:05.986976  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:06.011493  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:06.017010  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:06.017050  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:06.017195  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:06.017510  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:06.017758  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:06.050239  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:06.050593  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:06.052029  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:06.052038  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:06.053110  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:06.053882  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:06.073516  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.563003ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:06.173547  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.591885ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:06.188746  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:06.216911  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:06.258084  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:06.273353  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.435083ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:06.373643  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.642264ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:06.474289  108295 httplog.go:90] GET /api/v1/nodes/node-2: (2.245793ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:06.573736  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.767047ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:06.674046  108295 httplog.go:90] GET /api/v1/nodes/node-2: (2.061011ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:06.773674  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.59633ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:06.781293  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:06.781539  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:06.781552  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:06.781565  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:06.781576  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:06.781592  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:06.781646  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:06.873708  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.657679ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:06.973708  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.695618ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:06.984009  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:06.984030  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:06.984219  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:06.985200  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:06.986225  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:06.987156  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:07.011660  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:07.017198  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:07.017222  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:07.017485  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:07.017717  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:07.017871  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:07.050396  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:07.050772  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:07.052190  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:07.052189  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:07.053256  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:07.054005  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:07.073603  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.586326ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:07.173549  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.574696ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:07.188981  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:07.217102  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:07.258258  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:07.274070  108295 httplog.go:90] GET /api/v1/nodes/node-2: (2.034133ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:07.375423  108295 httplog.go:90] GET /api/v1/nodes/node-2: (2.39831ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:07.473824  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.862422ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:07.575729  108295 httplog.go:90] GET /api/v1/nodes/node-2: (3.621254ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:07.676528  108295 httplog.go:90] GET /api/v1/nodes/node-2: (4.528874ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:07.773481  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.418387ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:07.781547  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:07.781677  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:07.781724  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:07.781757  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:07.781758  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:07.781773  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:07.781773  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:07.873622  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.580636ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:07.976273  108295 httplog.go:90] GET /api/v1/nodes/node-2: (2.775277ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:07.984199  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:07.984210  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:07.984486  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:07.985379  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:07.987567  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:07.989805  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:08.011842  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:08.017498  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:08.017513  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:08.017683  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:08.017941  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:08.017954  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:08.050615  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:08.050937  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:08.052353  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:08.052355  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:08.053472  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:08.054144  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:08.073621  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.620705ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:08.173644  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.625874ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:08.189150  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:08.217271  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:08.258708  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:08.273544  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.555271ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:08.374375  108295 httplog.go:90] GET /api/v1/nodes/node-2: (2.160861ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:08.474788  108295 httplog.go:90] GET /api/v1/nodes/node-2: (2.537453ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:08.573253  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.314764ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:08.654884  108295 httplog.go:90] GET /api/v1/namespaces/default: (2.294924ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43256]
I0920 04:45:08.656724  108295 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.311407ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43256]
I0920 04:45:08.658402  108295 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.165441ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43256]
I0920 04:45:08.674802  108295 httplog.go:90] GET /api/v1/nodes/node-2: (2.141702ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:08.773786  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.700392ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:08.781714  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:08.781842  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:08.781857  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:08.781908  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:08.781916  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:08.781923  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:08.781938  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:08.873153  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.21313ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:08.974361  108295 httplog.go:90] GET /api/v1/nodes/node-2: (2.209723ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:08.984503  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:08.984556  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:08.984835  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:08.985577  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:08.989989  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:08.990090  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:09.012059  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:09.017693  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:09.017699  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:09.017828  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:09.018086  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:09.018129  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:09.050807  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:09.051103  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:09.052503  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:09.052524  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:09.053642  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:09.054336  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:09.073776  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.72791ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:09.173681  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.653223ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:09.189314  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:09.217490  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:09.258904  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:09.273850  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.896389ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:09.373690  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.639118ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:09.474061  108295 httplog.go:90] GET /api/v1/nodes/node-2: (2.011629ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:09.573700  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.71715ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:09.673867  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.845885ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:09.773807  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.768156ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:09.781933  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:09.781959  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:09.782045  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:09.782057  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:09.782054  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:09.782079  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:09.782147  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:09.873702  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.694811ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:09.878037  108295 httplog.go:90] GET /api/v1/namespaces/default: (1.028284ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57702]
I0920 04:45:09.879366  108295 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (913.895µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57702]
I0920 04:45:09.880688  108295 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (933.905µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57702]
I0920 04:45:09.973419  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.413663ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:09.984698  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:09.984702  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:09.985090  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:09.987869  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:09.990227  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:09.990378  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:10.012264  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:10.017856  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:10.017881  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:10.017909  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:10.018268  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:10.018290  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:10.051004  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:10.051260  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:10.052672  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:10.052701  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:10.053832  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:10.054564  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:10.073530  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.476774ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:10.174283  108295 httplog.go:90] GET /api/v1/nodes/node-2: (2.241106ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:10.189539  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:10.217684  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:10.259186  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:10.274376  108295 httplog.go:90] GET /api/v1/nodes/node-2: (2.231315ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:10.373986  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.824152ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:10.398216  108295 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 10.046339235s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:45:05 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:45:10.398285  108295 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 10.046416532s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:45:05 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:45:10.398298  108295 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 10.046430608s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:45:05 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:45:10.398312  108295 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 10.046444699s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:45:05 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:45:10.398362  108295 node_lifecycle_controller.go:796] Node node-1 is unresponsive as of 2019-09-20 04:45:10.398348771 +0000 UTC m=+371.191390021. Adding it to the Taint queue.
I0920 04:45:10.398393  108295 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 10.046475335s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:45:05 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:45:10.398406  108295 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 10.046487772s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:44:55 +0000 UTC,LastTransitionTime:2019-09-20 04:45:05 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:45:10.398421  108295 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 10.04650245s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:44:55 +0000 UTC,LastTransitionTime:2019-09-20 04:45:05 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:45:10.398478  108295 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 10.0465317s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:44:55 +0000 UTC,LastTransitionTime:2019-09-20 04:45:05 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:45:10.398517  108295 node_lifecycle_controller.go:796] Node node-2 is unresponsive as of 2019-09-20 04:45:10.398505512 +0000 UTC m=+371.191546768. Adding it to the Taint queue.
I0920 04:45:10.398551  108295 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 10.046408405s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:45:05 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:45:10.398564  108295 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 10.046422088s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:45:05 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:45:10.398575  108295 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 10.046432814s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:45:05 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:45:10.398585  108295 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 10.046442397s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:45:05 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:45:10.398612  108295 node_lifecycle_controller.go:796] Node node-0 is unresponsive as of 2019-09-20 04:45:10.398605363 +0000 UTC m=+371.191646616. Adding it to the Taint queue.
I0920 04:45:10.428555  108295 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 10.075824565s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:45:05 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:45:10.428615  108295 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 10.075894991s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:45:05 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:45:10.428634  108295 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 10.075916036s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:45:05 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:45:10.428659  108295 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 10.075940878s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:45:05 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:45:10.428720  108295 node_lifecycle_controller.go:796] Node node-0 is unresponsive as of 2019-09-20 04:45:10.428699817 +0000 UTC m=+371.221741070. Adding it to the Taint queue.
I0920 04:45:10.428753  108295 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 10.075967325s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:45:05 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:45:10.428773  108295 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 10.075987898s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:45:05 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:45:10.428787  108295 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 10.076002751s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:45:05 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:45:10.428802  108295 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 10.076016656s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:45:05 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:45:10.428836  108295 node_lifecycle_controller.go:796] Node node-1 is unresponsive as of 2019-09-20 04:45:10.428824099 +0000 UTC m=+371.221865352. Adding it to the Taint queue.
I0920 04:45:10.428866  108295 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 10.076025698s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:45:05 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:45:10.428903  108295 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 10.076062193s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:44:55 +0000 UTC,LastTransitionTime:2019-09-20 04:45:05 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:45:10.428918  108295 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 10.076076844s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:44:55 +0000 UTC,LastTransitionTime:2019-09-20 04:45:05 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:45:10.428930  108295 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 10.076090109s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:44:55 +0000 UTC,LastTransitionTime:2019-09-20 04:45:05 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:45:10.428984  108295 node_lifecycle_controller.go:796] Node node-2 is unresponsive as of 2019-09-20 04:45:10.428972814 +0000 UTC m=+371.222014066. Adding it to the Taint queue.
I0920 04:45:10.473854  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.62964ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:10.573220  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.313017ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:10.673884  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.850307ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:10.774589  108295 httplog.go:90] GET /api/v1/nodes/node-2: (2.307933ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:10.782141  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:10.782151  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:10.782196  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:10.782244  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:10.782270  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:10.782493  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:10.782510  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:10.874753  108295 httplog.go:90] GET /api/v1/nodes/node-2: (2.511414ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:10.976175  108295 httplog.go:90] GET /api/v1/nodes/node-2: (3.716569ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:10.984891  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:10.984901  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:10.985297  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:10.991183  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:10.991186  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:10.991350  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:11.012502  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:11.018068  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:11.018097  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:11.018116  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:11.018478  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:11.018528  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:11.051197  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:11.051464  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:11.052779  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:11.052882  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:11.054004  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:11.054708  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:11.073494  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.471057ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:11.173244  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.321865ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:11.189912  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:11.217899  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:11.259393  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:11.273583  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.568257ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:11.373715  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.751281ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:11.473493  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.534836ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:11.573609  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.599279ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:11.673750  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.739744ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:11.773610  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.615463ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:11.782324  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:11.782325  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:11.782358  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:11.782372  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:11.782385  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:11.782647  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:11.782671  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:11.874156  108295 httplog.go:90] GET /api/v1/nodes/node-2: (2.01289ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:11.973644  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.632302ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:11.985062  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:11.985064  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:11.985469  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:11.991409  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:11.991520  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:11.991521  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:12.012691  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:12.018191  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:12.018639  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:12.018930  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:12.018945  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:12.018935  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:12.051418  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:12.051657  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:12.052899  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:12.053036  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:12.054118  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:12.054870  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:12.073897  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.852429ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:12.173796  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.791249ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:12.190075  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:12.218123  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:12.259618  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:12.273762  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.718068ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:12.373621  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.637417ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:12.473753  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.729205ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:12.573492  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.494845ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:12.673437  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.474498ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:12.773738  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.712771ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:12.782436  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:12.782438  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:12.782449  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:12.782479  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:12.782471  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:12.782873  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:12.782988  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:12.873214  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.287094ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:12.973663  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.588125ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:12.985225  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:12.985232  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:12.985652  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:12.991681  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:12.991731  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:12.991730  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:13.012981  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:13.018390  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:13.018824  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:13.019048  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:13.019069  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:13.019080  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:13.051618  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:13.051858  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:13.053041  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:13.053183  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:13.054270  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:13.055072  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:13.074020  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.920443ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:13.173891  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.887928ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:13.190320  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:13.218298  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:13.260060  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:13.273552  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.530681ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:13.373649  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.675862ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:13.474497  108295 httplog.go:90] GET /api/v1/nodes/node-2: (2.363797ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:13.579526  108295 httplog.go:90] GET /api/v1/nodes/node-2: (2.598931ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:13.673653  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.633218ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:13.773935  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.805876ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:13.782912  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:13.782916  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:13.782927  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:13.782943  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:13.783152  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:13.783189  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:13.783189  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:13.873838  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.794234ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:13.973644  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.667051ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:13.985379  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:13.985379  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:13.985816  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:13.991925  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:13.992000  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:13.992017  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:14.014018  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:14.018581  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:14.019120  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:14.019203  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:14.019215  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:14.019237  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:14.051766  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:14.051989  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:14.053193  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:14.053406  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:14.054474  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:14.055230  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:14.074389  108295 httplog.go:90] GET /api/v1/nodes/node-2: (2.36686ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:14.173877  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.851192ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:14.190582  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:14.218486  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:14.260329  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:14.273483  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.460657ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:14.373624  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.596303ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:14.480978  108295 httplog.go:90] GET /api/v1/nodes/node-2: (8.067571ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:14.573610  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.59855ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:14.674020  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.939886ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:14.773951  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.867368ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:14.783059  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:14.783069  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:14.783100  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:14.783073  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:14.783332  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:14.783346  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:14.783326  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:14.796346  108295 httplog.go:90] GET /api/v1/namespaces/default: (1.422278ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39028]
I0920 04:45:14.797812  108295 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (983.048µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39028]
I0920 04:45:14.799047  108295 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (865.932µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39028]
I0920 04:45:14.857480  108295 httplog.go:90] GET /api/v1/namespaces/default: (1.42209ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:14.858995  108295 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.101676ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:14.860215  108295 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (890.568µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:14.873033  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.085961ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:14.974292  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.995771ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:14.985602  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:14.985615  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:14.986022  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:14.992180  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:14.992185  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:14.992351  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:15.014240  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:15.018890  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:15.019362  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:15.019450  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:15.019481  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:15.019485  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:15.052066  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:15.052194  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:15.053388  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:15.053667  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:15.054659  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:15.055426  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:15.073791  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.776504ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:15.174051  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.725585ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:15.190769  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:15.218838  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:15.260533  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:15.274023  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.88388ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:15.374114  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.555786ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:15.398915  108295 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 15.047033219s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:45:05 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:45:15.399120  108295 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 15.047249409s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:45:05 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:45:15.399206  108295 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 15.047335393s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:45:05 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:45:15.399277  108295 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 15.047406147s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:45:05 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:45:15.399590  108295 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 15.04767007s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:45:05 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:45:15.399685  108295 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 15.047765963s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:44:55 +0000 UTC,LastTransitionTime:2019-09-20 04:45:05 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:45:15.399723  108295 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 15.047805692s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:44:55 +0000 UTC,LastTransitionTime:2019-09-20 04:45:05 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:45:15.399798  108295 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 15.047877516s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:44:55 +0000 UTC,LastTransitionTime:2019-09-20 04:45:05 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:45:15.399913  108295 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 15.047769191s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:45:05 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:45:15.399985  108295 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 15.04783877s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:45:05 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:45:15.400017  108295 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 15.047874453s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:45:05 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:45:15.400044  108295 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 15.047901485s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:45:05 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:45:15.429897  108295 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 15.077160201s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:45:05 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:45:15.429993  108295 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 15.077274029s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:45:05 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:45:15.430011  108295 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 15.077292822s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:45:05 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:45:15.430029  108295 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 15.077311038s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:45:05 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:45:15.431301  108295 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 15.078497449s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:45:05 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:45:15.431385  108295 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 15.078598164s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:45:05 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:45:15.431409  108295 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 15.078623594s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:45:05 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:45:15.431425  108295 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 15.078639448s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:45:05 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:45:15.431590  108295 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 15.078747127s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:45:05 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:45:15.431620  108295 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 15.078779275s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:44:55 +0000 UTC,LastTransitionTime:2019-09-20 04:45:05 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:45:15.431639  108295 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 15.078797442s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:44:55 +0000 UTC,LastTransitionTime:2019-09-20 04:45:05 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:45:15.431659  108295 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 15.078817499s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:44:55 +0000 UTC,LastTransitionTime:2019-09-20 04:45:05 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:45:15.477987  108295 httplog.go:90] GET /api/v1/nodes/node-2: (5.029131ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:15.573687  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.699819ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:15.674086  108295 httplog.go:90] GET /api/v1/nodes/node-2: (2.045777ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:15.773891  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.813522ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:15.783237  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:15.783237  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:15.783346  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:15.783522  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:15.783545  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:15.783577  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:15.783579  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:15.877159  108295 httplog.go:90] GET /api/v1/nodes/node-2: (3.198756ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:15.974600  108295 httplog.go:90] GET /api/v1/nodes/node-2: (2.458025ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:15.985840  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:15.985851  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:15.986138  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:15.992369  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:15.992438  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:15.992622  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:16.014409  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:16.019192  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:16.019647  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:16.019657  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:16.019682  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:16.019689  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:16.052273  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:16.052285  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:16.053607  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:16.054200  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:16.054897  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:16.055581  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:16.076011  108295 httplog.go:90] GET /api/v1/nodes/node-2: (3.996912ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:16.174095  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.771044ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:16.191102  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:16.219056  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:16.260787  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:16.273861  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.798384ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:16.373941  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.869806ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:16.473683  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.673934ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:16.573746  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.729123ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:16.582280  108295 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.345244ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57702]
I0920 04:45:16.583608  108295 httplog.go:90] GET /api/v1/namespaces/kube-public: (872.401µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57702]
I0920 04:45:16.584828  108295 httplog.go:90] GET /api/v1/namespaces/kube-node-lease: (888.271µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57702]
I0920 04:45:16.673448  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.4731ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:16.775025  108295 httplog.go:90] GET /api/v1/nodes/node-2: (2.740938ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:16.783429  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:16.783430  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:16.783504  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:16.783680  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:16.783795  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:16.783697  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:16.783739  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:16.874110  108295 httplog.go:90] GET /api/v1/nodes/node-2: (2.108572ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:16.977052  108295 httplog.go:90] GET /api/v1/nodes/node-2: (4.895249ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:16.986076  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:16.986491  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:16.986491  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:16.992796  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:16.992810  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:16.992926  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:17.014631  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:17.019391  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:17.019860  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:17.019875  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:17.019880  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:17.020054  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:17.052518  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:17.052753  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:17.053797  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:17.054360  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:17.055127  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:17.055739  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:17.073831  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.786612ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:17.173699  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.665927ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:17.191314  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:17.219241  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:17.261261  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:17.274537  108295 httplog.go:90] GET /api/v1/nodes/node-2: (2.362364ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:17.373906  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.87972ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:17.473747  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.732669ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:17.573660  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.678494ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:17.674480  108295 httplog.go:90] GET /api/v1/nodes/node-2: (2.34674ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:17.774897  108295 httplog.go:90] GET /api/v1/nodes/node-2: (2.095114ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:17.783741  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:17.783742  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:17.784057  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:17.784069  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:17.784097  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:17.784060  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:17.784069  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:17.874512  108295 httplog.go:90] GET /api/v1/nodes/node-2: (2.407494ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:17.974394  108295 httplog.go:90] GET /api/v1/nodes/node-2: (2.023589ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:17.986245  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:17.986709  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:17.986711  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:17.992954  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:17.992976  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:17.993065  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:18.014868  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:18.019567  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:18.020012  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:18.020042  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:18.020046  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:18.020275  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:18.052693  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:18.052978  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:18.053998  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:18.054547  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:18.055295  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:18.055896  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:18.075906  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.954025ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:18.173936  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.817107ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:18.191521  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:18.219654  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:18.261496  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:18.278935  108295 httplog.go:90] GET /api/v1/nodes/node-2: (2.220805ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:18.377085  108295 httplog.go:90] GET /api/v1/nodes/node-2: (2.670286ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:18.473715  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.681474ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:18.573476  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.506545ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:18.654725  108295 httplog.go:90] GET /api/v1/namespaces/default: (1.612328ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43256]
I0920 04:45:18.656263  108295 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.129898ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43256]
I0920 04:45:18.657706  108295 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (984.143µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43256]
I0920 04:45:18.673539  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.585518ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:18.773773  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.713535ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:18.783909  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:18.783913  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:18.784209  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:18.784212  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:18.784218  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:18.784278  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:18.784392  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:18.873758  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.699655ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:18.973626  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.636951ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:18.986398  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:18.986851  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:18.986864  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:18.993112  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:18.993118  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:18.993133  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:19.015021  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:19.019696  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:19.020225  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:19.020231  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:19.020343  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:19.020426  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:19.052785  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:19.053147  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:19.054130  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:19.054704  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:19.055473  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:19.056027  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:19.073273  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.34801ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:19.177171  108295 httplog.go:90] GET /api/v1/nodes/node-2: (4.022206ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:19.191689  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:19.220116  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:19.261724  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:19.273818  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.726101ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:19.373987  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.887777ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:19.473442  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.406381ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:19.573977  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.905243ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:19.673908  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.893075ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:19.774389  108295 httplog.go:90] GET /api/v1/nodes/node-2: (2.182968ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:19.784080  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:19.784082  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:19.784298  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:19.784317  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:19.784345  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:19.784439  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:19.784534  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:19.873686  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.675574ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:19.878313  108295 httplog.go:90] GET /api/v1/namespaces/default: (1.149301ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57702]
I0920 04:45:19.879746  108295 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.005987ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57702]
I0920 04:45:19.880928  108295 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (886.721µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57702]
I0920 04:45:19.973769  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.721907ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:19.986612  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:19.987174  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:19.987188  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:19.993400  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:19.994000  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:19.994005  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:20.015195  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:20.019865  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:20.020371  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:20.020377  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:20.020514  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:20.020572  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:20.052891  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:20.053285  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:20.054256  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:20.054872  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:20.055622  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:20.056142  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:20.073236  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.361379ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:20.173593  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.632416ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:20.191946  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:20.220570  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:20.261861  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:20.273711  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.661684ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:20.373398  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.468872ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:20.400355  108295 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 20.048204946s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:45:05 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:45:20.400409  108295 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 20.048266315s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:45:05 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:45:20.400422  108295 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 20.048279387s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:45:05 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:45:20.400433  108295 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 20.048290208s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:45:05 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:45:20.400509  108295 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 20.048641137s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:45:05 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:45:20.400522  108295 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 20.048654926s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:45:05 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:45:20.400533  108295 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 20.048664731s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:45:05 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:45:20.400551  108295 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 20.048679505s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:45:05 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:45:20.400593  108295 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 20.048674706s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:45:05 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:45:20.400617  108295 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 20.048698413s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:44:55 +0000 UTC,LastTransitionTime:2019-09-20 04:45:05 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:45:20.400651  108295 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 20.048731997s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:44:55 +0000 UTC,LastTransitionTime:2019-09-20 04:45:05 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:45:20.400671  108295 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 20.048752288s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:44:55 +0000 UTC,LastTransitionTime:2019-09-20 04:45:05 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:45:20.431914  108295 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 20.079188979s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:45:05 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:45:20.431960  108295 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 20.079242446s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:45:05 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:45:20.431973  108295 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 20.079255773s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:45:05 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:45:20.431988  108295 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 20.079270831s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:45:05 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:45:20.432035  108295 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 20.079250802s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:45:05 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:45:20.432045  108295 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 20.079260921s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:45:05 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:45:20.432057  108295 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 20.079272907s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:45:05 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:45:20.432079  108295 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 20.079295704s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:45:05 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:45:20.432106  108295 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 20.079266866s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:45:05 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:45:20.432116  108295 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 20.079276392s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:44:55 +0000 UTC,LastTransitionTime:2019-09-20 04:45:05 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:45:20.432124  108295 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 20.079285237s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:44:55 +0000 UTC,LastTransitionTime:2019-09-20 04:45:05 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:45:20.432133  108295 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 20.079293603s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:44:55 +0000 UTC,LastTransitionTime:2019-09-20 04:45:05 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:45:20.473543  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.500203ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:20.580055  108295 httplog.go:90] GET /api/v1/nodes/node-2: (7.122113ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:20.674156  108295 httplog.go:90] GET /api/v1/nodes/node-2: (2.002324ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:20.773346  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.388347ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:20.784187  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:20.784340  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:20.784438  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:20.784442  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:20.784441  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:20.784566  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:20.784686  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:20.875957  108295 httplog.go:90] GET /api/v1/nodes/node-2: (3.432812ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:20.974222  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.97006ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:20.986939  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:20.987324  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:20.987415  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:20.994106  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:20.994154  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:20.994170  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:21.015389  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:21.020119  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:21.020583  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:21.020684  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:21.020687  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:21.020856  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:21.053078  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:21.053479  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:21.054426  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:21.055079  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:21.055815  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:21.056297  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:21.073836  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.760374ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:21.173634  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.646966ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:21.192153  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:21.220862  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:21.262122  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:21.274395  108295 httplog.go:90] GET /api/v1/nodes/node-2: (2.205673ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:21.373754  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.706955ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:21.473974  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.921887ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:21.574055  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.724218ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:21.673637  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.625356ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:21.773973  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.915236ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:21.784266  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:21.784516  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:21.784609  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:21.784625  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:21.784611  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:21.784689  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:21.784817  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:21.873666  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.639122ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:21.977929  108295 httplog.go:90] GET /api/v1/nodes/node-2: (4.583034ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:21.987161  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:21.987569  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:21.987668  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:21.994324  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:21.994329  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:21.994333  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:22.015540  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:22.020387  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:22.020742  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:22.020847  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:22.020879  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:22.021074  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:22.053379  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:22.053751  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:22.054706  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:22.055356  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:22.056051  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:22.056504  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:22.074707  108295 httplog.go:90] GET /api/v1/nodes/node-2: (2.659449ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:22.178671  108295 httplog.go:90] GET /api/v1/nodes/node-2: (2.270059ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:22.192382  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:22.221097  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:22.262382  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:22.273801  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.780818ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:22.373870  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.83939ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:22.473609  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.648912ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:22.574439  108295 httplog.go:90] GET /api/v1/nodes/node-2: (2.129013ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:22.673792  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.674138ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:22.774188  108295 httplog.go:90] GET /api/v1/nodes/node-2: (2.0081ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:22.784471  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:22.784716  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:22.784782  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:22.784805  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:22.784802  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:22.784818  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:22.785124  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:22.873580  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.613731ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:22.976901  108295 httplog.go:90] GET /api/v1/nodes/node-2: (4.820231ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:22.987593  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:22.987911  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:22.988062  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:22.994535  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:22.994780  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:22.994885  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:23.015696  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:23.020549  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:23.020920  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:23.021000  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:23.020974  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:23.021254  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:23.053699  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:23.053915  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:23.054945  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:23.055517  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:23.056207  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:23.056747  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:23.076284  108295 httplog.go:90] GET /api/v1/nodes/node-2: (4.183868ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:23.173714  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.684085ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:23.192587  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:23.221486  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:23.262585  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:23.273618  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.584597ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:23.373904  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.835658ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:23.474191  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.838998ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:23.573765  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.691478ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:23.673920  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.892694ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:23.773361  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.307264ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:23.784659  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:23.784844  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:23.784925  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:23.785003  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:23.785041  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:23.785246  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:23.785333  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:23.875121  108295 httplog.go:90] GET /api/v1/nodes/node-2: (2.783289ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:23.974735  108295 httplog.go:90] GET /api/v1/nodes/node-2: (2.485184ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:23.987752  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:23.988008  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:23.988308  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:23.994916  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:23.995096  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:23.995104  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:24.015861  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:24.020833  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:24.021197  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:24.021212  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:24.021233  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:24.021436  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:24.054246  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:24.054288  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:24.055207  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:24.055810  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:24.056665  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:24.056909  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:24.073644  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.563778ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:24.173955  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.918726ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:24.192752  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:24.221672  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:24.262842  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:24.273844  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.734131ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:24.374605  108295 httplog.go:90] GET /api/v1/nodes/node-2: (2.467752ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:24.478445  108295 httplog.go:90] GET /api/v1/nodes/node-2: (2.326968ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:24.576115  108295 httplog.go:90] GET /api/v1/nodes/node-2: (2.465538ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:24.674742  108295 httplog.go:90] GET /api/v1/nodes/node-2: (2.560676ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:24.774109  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.728364ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:24.784824  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:24.784994  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:24.785070  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:24.785218  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:24.785337  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:24.785340  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:24.785487  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:24.796279  108295 httplog.go:90] GET /api/v1/namespaces/default: (1.31643ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39028]
I0920 04:45:24.797640  108295 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (956.852µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39028]
I0920 04:45:24.798980  108295 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.009202ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39028]
I0920 04:45:24.861044  108295 httplog.go:90] GET /api/v1/namespaces/default: (3.749046ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:24.864266  108295 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (2.306269ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:24.865855  108295 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.024829ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:24.874302  108295 httplog.go:90] GET /api/v1/nodes/node-2: (2.28505ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:24.973805  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.738585ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:24.988674  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:24.989194  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:24.989208  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:24.995502  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:24.995524  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:24.995878  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:25.016069  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:25.021106  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:25.021353  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:25.021365  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:25.021391  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:25.021551  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:25.054481  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:25.054485  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:25.055507  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:25.056111  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:25.056897  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:25.057086  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:25.073781  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.718257ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:25.173910  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.804722ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:25.192961  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:25.221869  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:25.262989  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:25.273642  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.61357ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:25.375173  108295 httplog.go:90] GET /api/v1/nodes/node-2: (2.74735ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:25.400965  108295 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 25.048811566s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:45:05 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:45:25.401043  108295 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 25.048896972s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:45:05 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:45:25.401066  108295 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 25.048923197s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:45:05 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:45:25.401087  108295 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 25.048944014s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:45:05 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:45:25.401190  108295 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 25.049321491s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:45:05 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:45:25.401213  108295 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 25.049344389s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:45:05 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:45:25.401233  108295 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 25.049365387s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:45:05 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:45:25.401246  108295 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 25.04937838s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:45:05 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:45:25.401308  108295 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 25.049389622s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:45:05 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:45:25.401345  108295 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 25.049426195s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:44:55 +0000 UTC,LastTransitionTime:2019-09-20 04:45:05 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:45:25.401358  108295 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 25.049441186s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:44:55 +0000 UTC,LastTransitionTime:2019-09-20 04:45:05 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:45:25.401383  108295 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 25.049464628s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:44:55 +0000 UTC,LastTransitionTime:2019-09-20 04:45:05 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:45:25.432399  108295 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 25.079666195s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:45:05 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:45:25.432487  108295 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 25.079766507s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:45:05 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:45:25.432519  108295 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 25.079801364s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:45:05 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:45:25.432530  108295 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 25.079812945s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:45:05 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:45:25.432601  108295 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 25.079816865s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:45:05 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:45:25.432613  108295 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 25.079829031s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:45:05 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:45:25.432623  108295 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 25.079839053s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:45:05 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:45:25.432633  108295 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 25.07984944s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:45:05 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:45:25.432664  108295 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 25.07982398s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:45:05 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:45:25.432695  108295 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 25.079854969s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:44:55 +0000 UTC,LastTransitionTime:2019-09-20 04:45:05 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:45:25.432704  108295 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 25.079865006s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:44:55 +0000 UTC,LastTransitionTime:2019-09-20 04:45:05 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:45:25.432716  108295 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 25.079876123s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:44:55 +0000 UTC,LastTransitionTime:2019-09-20 04:45:05 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:45:25.473667  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.529687ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:25.475317  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.177237ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
Sep 20 04:45:25.475: INFO: Waiting up to 15s for pod "testpod-2" in namespace "taint-based-evictionsa3942f40-2d3c-4209-a0b3-f0d7dfbc0129" to be "terminating"
I0920 04:45:25.476856  108295 httplog.go:90] GET /api/v1/namespaces/taint-based-evictionsa3942f40-2d3c-4209-a0b3-f0d7dfbc0129/pods/testpod-2: (1.051718ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
Sep 20 04:45:25.477: INFO: Pod "testpod-2": Phase="Pending", Reason="", readiness=false. Elapsed: 1.655107ms
Sep 20 04:45:25.477: INFO: Pod "testpod-2" satisfied condition "terminating"
I0920 04:45:25.481577  108295 httplog.go:90] DELETE /api/v1/namespaces/taint-based-evictionsa3942f40-2d3c-4209-a0b3-f0d7dfbc0129/pods/testpod-2: (4.042314ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:25.481727  108295 taint_manager.go:383] Noticed pod deletion: types.NamespacedName{Namespace:"taint-based-evictionsa3942f40-2d3c-4209-a0b3-f0d7dfbc0129", Name:"testpod-2"}
I0920 04:45:25.481749  108295 timed_workers.go:129] Cancelling TimedWorkerQueue item taint-based-evictionsa3942f40-2d3c-4209-a0b3-f0d7dfbc0129/testpod-2 at 2019-09-20 04:45:25.48174617 +0000 UTC m=+386.274787416
I0920 04:45:25.481772  108295 taint_manager.go:383] Noticed pod deletion: types.NamespacedName{Namespace:"taint-based-evictionsa3942f40-2d3c-4209-a0b3-f0d7dfbc0129", Name:"testpod-2"}
I0920 04:45:25.481793  108295 timed_workers.go:129] Cancelling TimedWorkerQueue item taint-based-evictionsa3942f40-2d3c-4209-a0b3-f0d7dfbc0129/testpod-2 at 2019-09-20 04:45:25.481789817 +0000 UTC m=+386.274831073
I0920 04:45:25.484021  108295 httplog.go:90] GET /api/v1/namespaces/taint-based-evictionsa3942f40-2d3c-4209-a0b3-f0d7dfbc0129/pods/testpod-2: (940.654µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:25.491063  108295 node_tree.go:113] Removed node "node-0" in group "region1:\x00:zone1" from NodeTree
I0920 04:45:25.491150  108295 taint_manager.go:422] Noticed node deletion: "node-0"
I0920 04:45:25.491178  108295 taint_manager.go:422] Noticed node deletion: "node-0"
I0920 04:45:25.497377  108295 node_tree.go:113] Removed node "node-1" in group "region1:\x00:zone1" from NodeTree
I0920 04:45:25.497407  108295 taint_manager.go:422] Noticed node deletion: "node-1"
I0920 04:45:25.497486  108295 taint_manager.go:422] Noticed node deletion: "node-1"
I0920 04:45:25.503230  108295 httplog.go:90] DELETE /api/v1/nodes: (18.646983ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33656]
I0920 04:45:25.503289  108295 node_tree.go:113] Removed node "node-2" in group "region1:\x00:zone1" from NodeTree
I0920 04:45:25.503435  108295 taint_manager.go:422] Noticed node deletion: "node-2"
I0920 04:45:25.503435  108295 taint_manager.go:422] Noticed node deletion: "node-2"
I0920 04:45:25.785033  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:25.785139  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:25.785167  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:25.785382  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:25.785575  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:25.785687  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:25.785706  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:25.989447  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:25.989447  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:25.989447  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:25.995675  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:25.995687  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:25.996160  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:26.016208  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:26.021288  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:26.021586  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:26.021605  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:26.021608  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:26.021703  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:26.054644  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:26.054654  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:26.055632  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:26.056297  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:26.057054  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:26.057289  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:26.193173  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:26.222078  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:45:26.263146  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
    --- FAIL: TestTaintBasedEvictions/Taint_based_evictions_for_NodeNotReady_and_0_tolerationseconds (35.08s)
        taint_test.go:782: Failed to taint node in test 2 <node-2>, err: timed out waiting for the condition

				from junit_d965d8661547eb73cabe6d94d5550ec333e4c0fa_20190920-043242.xml



k8s.io/kubernetes/test/integration/scheduler TestTaintBasedEvictions/Taint_based_evictions_for_NodeNotReady_and_200_tolerationseconds 35s

go test -v k8s.io/kubernetes/test/integration/scheduler -run TestTaintBasedEvictions/Taint_based_evictions_for_NodeNotReady_and_200_tolerationseconds$
=== RUN   TestTaintBasedEvictions/Taint_based_evictions_for_NodeNotReady_and_200_tolerationseconds
W0920 04:43:41.331740  108295 services.go:35] No CIDR for service cluster IPs specified. Default value which was 10.0.0.0/24 is deprecated and will be removed in future releases. Please specify it using --service-cluster-ip-range on kube-apiserver.
I0920 04:43:41.331767  108295 services.go:47] Setting service IP to "10.0.0.1" (read-write).
I0920 04:43:41.331782  108295 master.go:303] Node port range unspecified. Defaulting to 30000-32767.
I0920 04:43:41.331793  108295 master.go:259] Using reconciler: 
I0920 04:43:41.333718  108295 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.333950  108295 client.go:361] parsed scheme: "endpoint"
I0920 04:43:41.334108  108295 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:43:41.334812  108295 store.go:1342] Monitoring podtemplates count at <storage-prefix>//podtemplates
I0920 04:43:41.334864  108295 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.334883  108295 reflector.go:153] Listing and watching *core.PodTemplate from storage/cacher.go:/podtemplates
I0920 04:43:41.335154  108295 client.go:361] parsed scheme: "endpoint"
I0920 04:43:41.335196  108295 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:43:41.336349  108295 watch_cache.go:405] Replace watchCache (rev: 59296) 
I0920 04:43:41.336786  108295 store.go:1342] Monitoring events count at <storage-prefix>//events
I0920 04:43:41.336814  108295 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.336851  108295 reflector.go:153] Listing and watching *core.Event from storage/cacher.go:/events
I0920 04:43:41.336942  108295 client.go:361] parsed scheme: "endpoint"
I0920 04:43:41.336973  108295 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:43:41.337704  108295 store.go:1342] Monitoring limitranges count at <storage-prefix>//limitranges
I0920 04:43:41.337750  108295 reflector.go:153] Listing and watching *core.LimitRange from storage/cacher.go:/limitranges
I0920 04:43:41.337743  108295 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.337859  108295 client.go:361] parsed scheme: "endpoint"
I0920 04:43:41.337875  108295 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:43:41.338260  108295 watch_cache.go:405] Replace watchCache (rev: 59296) 
I0920 04:43:41.338956  108295 store.go:1342] Monitoring resourcequotas count at <storage-prefix>//resourcequotas
I0920 04:43:41.339009  108295 reflector.go:153] Listing and watching *core.ResourceQuota from storage/cacher.go:/resourcequotas
I0920 04:43:41.339171  108295 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.339275  108295 client.go:361] parsed scheme: "endpoint"
I0920 04:43:41.339299  108295 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:43:41.339386  108295 watch_cache.go:405] Replace watchCache (rev: 59296) 
I0920 04:43:41.339734  108295 watch_cache.go:405] Replace watchCache (rev: 59296) 
I0920 04:43:41.339824  108295 store.go:1342] Monitoring secrets count at <storage-prefix>//secrets
I0920 04:43:41.339854  108295 reflector.go:153] Listing and watching *core.Secret from storage/cacher.go:/secrets
I0920 04:43:41.340004  108295 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.340281  108295 client.go:361] parsed scheme: "endpoint"
I0920 04:43:41.340306  108295 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:43:41.341155  108295 watch_cache.go:405] Replace watchCache (rev: 59297) 
I0920 04:43:41.341392  108295 store.go:1342] Monitoring persistentvolumes count at <storage-prefix>//persistentvolumes
I0920 04:43:41.341570  108295 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.341714  108295 client.go:361] parsed scheme: "endpoint"
I0920 04:43:41.341743  108295 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:43:41.341865  108295 reflector.go:153] Listing and watching *core.PersistentVolume from storage/cacher.go:/persistentvolumes
I0920 04:43:41.343719  108295 watch_cache.go:405] Replace watchCache (rev: 59298) 
I0920 04:43:41.345339  108295 store.go:1342] Monitoring persistentvolumeclaims count at <storage-prefix>//persistentvolumeclaims
I0920 04:43:41.345380  108295 reflector.go:153] Listing and watching *core.PersistentVolumeClaim from storage/cacher.go:/persistentvolumeclaims
I0920 04:43:41.345518  108295 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.345638  108295 client.go:361] parsed scheme: "endpoint"
I0920 04:43:41.345671  108295 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:43:41.346886  108295 store.go:1342] Monitoring configmaps count at <storage-prefix>//configmaps
I0920 04:43:41.347392  108295 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.347558  108295 client.go:361] parsed scheme: "endpoint"
I0920 04:43:41.347585  108295 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:43:41.346968  108295 reflector.go:153] Listing and watching *core.ConfigMap from storage/cacher.go:/configmaps
I0920 04:43:41.349140  108295 store.go:1342] Monitoring namespaces count at <storage-prefix>//namespaces
I0920 04:43:41.349350  108295 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.349522  108295 client.go:361] parsed scheme: "endpoint"
I0920 04:43:41.349543  108295 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:43:41.349624  108295 watch_cache.go:405] Replace watchCache (rev: 59298) 
I0920 04:43:41.349635  108295 reflector.go:153] Listing and watching *core.Namespace from storage/cacher.go:/namespaces
I0920 04:43:41.350578  108295 watch_cache.go:405] Replace watchCache (rev: 59298) 
I0920 04:43:41.351127  108295 store.go:1342] Monitoring endpoints count at <storage-prefix>//services/endpoints
I0920 04:43:41.351183  108295 watch_cache.go:405] Replace watchCache (rev: 59298) 
I0920 04:43:41.351322  108295 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.351391  108295 reflector.go:153] Listing and watching *core.Endpoints from storage/cacher.go:/services/endpoints
I0920 04:43:41.351435  108295 client.go:361] parsed scheme: "endpoint"
I0920 04:43:41.351487  108295 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:43:41.352332  108295 watch_cache.go:405] Replace watchCache (rev: 59298) 
I0920 04:43:41.352599  108295 store.go:1342] Monitoring nodes count at <storage-prefix>//minions
I0920 04:43:41.352689  108295 reflector.go:153] Listing and watching *core.Node from storage/cacher.go:/minions
I0920 04:43:41.353049  108295 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.353260  108295 client.go:361] parsed scheme: "endpoint"
I0920 04:43:41.354041  108295 watch_cache.go:405] Replace watchCache (rev: 59298) 
I0920 04:43:41.354219  108295 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:43:41.359376  108295 store.go:1342] Monitoring pods count at <storage-prefix>//pods
I0920 04:43:41.359597  108295 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.359740  108295 client.go:361] parsed scheme: "endpoint"
I0920 04:43:41.359767  108295 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:43:41.359859  108295 reflector.go:153] Listing and watching *core.Pod from storage/cacher.go:/pods
I0920 04:43:41.360993  108295 watch_cache.go:405] Replace watchCache (rev: 59298) 
I0920 04:43:41.361487  108295 store.go:1342] Monitoring serviceaccounts count at <storage-prefix>//serviceaccounts
I0920 04:43:41.361671  108295 reflector.go:153] Listing and watching *core.ServiceAccount from storage/cacher.go:/serviceaccounts
I0920 04:43:41.362096  108295 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.362321  108295 client.go:361] parsed scheme: "endpoint"
I0920 04:43:41.362410  108295 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:43:41.362416  108295 watch_cache.go:405] Replace watchCache (rev: 59298) 
I0920 04:43:41.363834  108295 store.go:1342] Monitoring services count at <storage-prefix>//services/specs
I0920 04:43:41.363862  108295 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.363909  108295 reflector.go:153] Listing and watching *core.Service from storage/cacher.go:/services/specs
I0920 04:43:41.363956  108295 client.go:361] parsed scheme: "endpoint"
I0920 04:43:41.363970  108295 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:43:41.364886  108295 watch_cache.go:405] Replace watchCache (rev: 59298) 
I0920 04:43:41.364977  108295 client.go:361] parsed scheme: "endpoint"
I0920 04:43:41.365003  108295 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:43:41.366279  108295 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.366391  108295 client.go:361] parsed scheme: "endpoint"
I0920 04:43:41.366416  108295 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:43:41.369190  108295 reflector.go:153] Listing and watching *core.ReplicationController from storage/cacher.go:/controllers
I0920 04:43:41.369576  108295 store.go:1342] Monitoring replicationcontrollers count at <storage-prefix>//controllers
I0920 04:43:41.369627  108295 rest.go:115] the default service ipfamily for this cluster is: IPv4
I0920 04:43:41.370338  108295 storage_factory.go:285] storing bindings in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.370589  108295 storage_factory.go:285] storing componentstatuses in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.371199  108295 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.371735  108295 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.372433  108295 watch_cache.go:405] Replace watchCache (rev: 59298) 
I0920 04:43:41.372843  108295 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.373393  108295 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.373882  108295 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.373971  108295 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.374139  108295 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.374663  108295 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.375239  108295 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.375505  108295 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.376379  108295 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.376691  108295 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.377290  108295 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.377579  108295 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.378298  108295 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.378516  108295 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.378766  108295 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.378925  108295 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.379135  108295 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.379297  108295 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.379447  108295 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.380310  108295 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.380565  108295 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.381403  108295 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.382357  108295 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.382630  108295 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.382945  108295 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.383983  108295 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.384434  108295 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.385651  108295 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.386484  108295 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.387181  108295 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.388317  108295 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.388595  108295 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.388712  108295 master.go:450] Skipping disabled API group "auditregistration.k8s.io".
I0920 04:43:41.388735  108295 master.go:461] Enabling API group "authentication.k8s.io".
I0920 04:43:41.388757  108295 master.go:461] Enabling API group "authorization.k8s.io".
I0920 04:43:41.388894  108295 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.389030  108295 client.go:361] parsed scheme: "endpoint"
I0920 04:43:41.389053  108295 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:43:41.389832  108295 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0920 04:43:41.389913  108295 reflector.go:153] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0920 04:43:41.390247  108295 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.390493  108295 client.go:361] parsed scheme: "endpoint"
I0920 04:43:41.390512  108295 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:43:41.391176  108295 watch_cache.go:405] Replace watchCache (rev: 59298) 
I0920 04:43:41.391414  108295 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0920 04:43:41.391550  108295 reflector.go:153] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0920 04:43:41.391760  108295 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.391967  108295 client.go:361] parsed scheme: "endpoint"
I0920 04:43:41.392050  108295 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:43:41.392546  108295 watch_cache.go:405] Replace watchCache (rev: 59298) 
I0920 04:43:41.392738  108295 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0920 04:43:41.392758  108295 master.go:461] Enabling API group "autoscaling".
I0920 04:43:41.392893  108295 reflector.go:153] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0920 04:43:41.392909  108295 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.393035  108295 client.go:361] parsed scheme: "endpoint"
I0920 04:43:41.393068  108295 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:43:41.393941  108295 watch_cache.go:405] Replace watchCache (rev: 59298) 
I0920 04:43:41.394061  108295 store.go:1342] Monitoring jobs.batch count at <storage-prefix>//jobs
I0920 04:43:41.394103  108295 reflector.go:153] Listing and watching *batch.Job from storage/cacher.go:/jobs
I0920 04:43:41.394241  108295 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.394367  108295 client.go:361] parsed scheme: "endpoint"
I0920 04:43:41.394396  108295 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:43:41.394750  108295 watch_cache.go:405] Replace watchCache (rev: 59298) 
I0920 04:43:41.395028  108295 store.go:1342] Monitoring cronjobs.batch count at <storage-prefix>//cronjobs
I0920 04:43:41.395078  108295 reflector.go:153] Listing and watching *batch.CronJob from storage/cacher.go:/cronjobs
I0920 04:43:41.395090  108295 master.go:461] Enabling API group "batch".
I0920 04:43:41.395334  108295 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.395446  108295 client.go:361] parsed scheme: "endpoint"
I0920 04:43:41.395621  108295 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:43:41.396031  108295 watch_cache.go:405] Replace watchCache (rev: 59298) 
I0920 04:43:41.396403  108295 store.go:1342] Monitoring certificatesigningrequests.certificates.k8s.io count at <storage-prefix>//certificatesigningrequests
I0920 04:43:41.396419  108295 master.go:461] Enabling API group "certificates.k8s.io".
I0920 04:43:41.396524  108295 reflector.go:153] Listing and watching *certificates.CertificateSigningRequest from storage/cacher.go:/certificatesigningrequests
I0920 04:43:41.396571  108295 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.396664  108295 client.go:361] parsed scheme: "endpoint"
I0920 04:43:41.396675  108295 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:43:41.397394  108295 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0920 04:43:41.397533  108295 reflector.go:153] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0920 04:43:41.397613  108295 watch_cache.go:405] Replace watchCache (rev: 59298) 
I0920 04:43:41.397685  108295 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.397819  108295 client.go:361] parsed scheme: "endpoint"
I0920 04:43:41.397838  108295 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:43:41.398085  108295 watch_cache.go:405] Replace watchCache (rev: 59298) 
I0920 04:43:41.398319  108295 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0920 04:43:41.398341  108295 master.go:461] Enabling API group "coordination.k8s.io".
I0920 04:43:41.398357  108295 master.go:450] Skipping disabled API group "discovery.k8s.io".
I0920 04:43:41.398430  108295 reflector.go:153] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0920 04:43:41.398567  108295 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.398689  108295 client.go:361] parsed scheme: "endpoint"
I0920 04:43:41.398720  108295 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:43:41.399277  108295 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0920 04:43:41.399300  108295 master.go:461] Enabling API group "extensions".
I0920 04:43:41.399314  108295 reflector.go:153] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0920 04:43:41.399497  108295 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.399613  108295 client.go:361] parsed scheme: "endpoint"
I0920 04:43:41.399626  108295 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:43:41.399785  108295 watch_cache.go:405] Replace watchCache (rev: 59298) 
I0920 04:43:41.400165  108295 watch_cache.go:405] Replace watchCache (rev: 59298) 
I0920 04:43:41.400529  108295 store.go:1342] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I0920 04:43:41.400662  108295 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.400763  108295 client.go:361] parsed scheme: "endpoint"
I0920 04:43:41.400801  108295 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:43:41.400886  108295 reflector.go:153] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I0920 04:43:41.401410  108295 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0920 04:43:41.401508  108295 master.go:461] Enabling API group "networking.k8s.io".
I0920 04:43:41.401547  108295 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.401562  108295 reflector.go:153] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0920 04:43:41.401643  108295 client.go:361] parsed scheme: "endpoint"
I0920 04:43:41.401659  108295 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:43:41.401800  108295 watch_cache.go:405] Replace watchCache (rev: 59298) 
I0920 04:43:41.402051  108295 watch_cache.go:405] Replace watchCache (rev: 59298) 
I0920 04:43:41.402447  108295 store.go:1342] Monitoring runtimeclasses.node.k8s.io count at <storage-prefix>//runtimeclasses
I0920 04:43:41.402507  108295 master.go:461] Enabling API group "node.k8s.io".
I0920 04:43:41.402658  108295 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.402733  108295 reflector.go:153] Listing and watching *node.RuntimeClass from storage/cacher.go:/runtimeclasses
I0920 04:43:41.402834  108295 client.go:361] parsed scheme: "endpoint"
I0920 04:43:41.402855  108295 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:43:41.403961  108295 watch_cache.go:405] Replace watchCache (rev: 59298) 
I0920 04:43:41.404548  108295 store.go:1342] Monitoring poddisruptionbudgets.policy count at <storage-prefix>//poddisruptionbudgets
I0920 04:43:41.404605  108295 reflector.go:153] Listing and watching *policy.PodDisruptionBudget from storage/cacher.go:/poddisruptionbudgets
I0920 04:43:41.404724  108295 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.404815  108295 client.go:361] parsed scheme: "endpoint"
I0920 04:43:41.404833  108295 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:43:41.406081  108295 store.go:1342] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicy
I0920 04:43:41.406112  108295 master.go:461] Enabling API group "policy".
I0920 04:43:41.406163  108295 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.406269  108295 client.go:361] parsed scheme: "endpoint"
I0920 04:43:41.406326  108295 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:43:41.406518  108295 reflector.go:153] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicy
I0920 04:43:41.407232  108295 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0920 04:43:41.407349  108295 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.407431  108295 client.go:361] parsed scheme: "endpoint"
I0920 04:43:41.407443  108295 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:43:41.407576  108295 reflector.go:153] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0920 04:43:41.408860  108295 watch_cache.go:405] Replace watchCache (rev: 59298) 
I0920 04:43:41.409799  108295 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0920 04:43:41.409821  108295 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.409903  108295 client.go:361] parsed scheme: "endpoint"
I0920 04:43:41.409913  108295 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:43:41.409960  108295 reflector.go:153] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0920 04:43:41.410595  108295 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0920 04:43:41.410710  108295 reflector.go:153] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0920 04:43:41.410795  108295 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.411000  108295 client.go:361] parsed scheme: "endpoint"
I0920 04:43:41.411027  108295 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:43:41.411254  108295 watch_cache.go:405] Replace watchCache (rev: 59298) 
I0920 04:43:41.411535  108295 watch_cache.go:405] Replace watchCache (rev: 59298) 
I0920 04:43:41.411686  108295 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0920 04:43:41.411725  108295 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.411848  108295 client.go:361] parsed scheme: "endpoint"
I0920 04:43:41.411879  108295 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:43:41.411878  108295 reflector.go:153] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0920 04:43:41.412784  108295 watch_cache.go:405] Replace watchCache (rev: 59298) 
I0920 04:43:41.412806  108295 reflector.go:153] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0920 04:43:41.412792  108295 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0920 04:43:41.413001  108295 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.413105  108295 client.go:361] parsed scheme: "endpoint"
I0920 04:43:41.413123  108295 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:43:41.413567  108295 watch_cache.go:405] Replace watchCache (rev: 59298) 
I0920 04:43:41.413858  108295 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0920 04:43:41.413882  108295 reflector.go:153] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0920 04:43:41.413976  108295 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.414079  108295 client.go:361] parsed scheme: "endpoint"
I0920 04:43:41.414350  108295 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:43:41.414299  108295 watch_cache.go:405] Replace watchCache (rev: 59298) 
I0920 04:43:41.414978  108295 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0920 04:43:41.415234  108295 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.415275  108295 reflector.go:153] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0920 04:43:41.415363  108295 client.go:361] parsed scheme: "endpoint"
I0920 04:43:41.415377  108295 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:43:41.415834  108295 watch_cache.go:405] Replace watchCache (rev: 59298) 
I0920 04:43:41.416811  108295 watch_cache.go:405] Replace watchCache (rev: 59298) 
I0920 04:43:41.417056  108295 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0920 04:43:41.417082  108295 master.go:461] Enabling API group "rbac.authorization.k8s.io".
I0920 04:43:41.417107  108295 reflector.go:153] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0920 04:43:41.417816  108295 watch_cache.go:405] Replace watchCache (rev: 59298) 
I0920 04:43:41.418309  108295 watch_cache.go:405] Replace watchCache (rev: 59298) 
I0920 04:43:41.418937  108295 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.419042  108295 client.go:361] parsed scheme: "endpoint"
I0920 04:43:41.419064  108295 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:43:41.419616  108295 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0920 04:43:41.419644  108295 reflector.go:153] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0920 04:43:41.419780  108295 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.419891  108295 client.go:361] parsed scheme: "endpoint"
I0920 04:43:41.419915  108295 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:43:41.420424  108295 watch_cache.go:405] Replace watchCache (rev: 59298) 
I0920 04:43:41.420646  108295 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0920 04:43:41.420669  108295 master.go:461] Enabling API group "scheduling.k8s.io".
I0920 04:43:41.420700  108295 reflector.go:153] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0920 04:43:41.420755  108295 master.go:450] Skipping disabled API group "settings.k8s.io".
I0920 04:43:41.420930  108295 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.421037  108295 client.go:361] parsed scheme: "endpoint"
I0920 04:43:41.421053  108295 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:43:41.421417  108295 watch_cache.go:405] Replace watchCache (rev: 59298) 
I0920 04:43:41.422035  108295 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0920 04:43:41.422098  108295 reflector.go:153] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0920 04:43:41.422374  108295 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.422551  108295 client.go:361] parsed scheme: "endpoint"
I0920 04:43:41.422627  108295 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:43:41.422874  108295 watch_cache.go:405] Replace watchCache (rev: 59298) 
I0920 04:43:41.423674  108295 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0920 04:43:41.423734  108295 reflector.go:153] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0920 04:43:41.423772  108295 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.424101  108295 client.go:361] parsed scheme: "endpoint"
I0920 04:43:41.424133  108295 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:43:41.424738  108295 watch_cache.go:405] Replace watchCache (rev: 59298) 
I0920 04:43:41.425240  108295 store.go:1342] Monitoring csinodes.storage.k8s.io count at <storage-prefix>//csinodes
I0920 04:43:41.425282  108295 reflector.go:153] Listing and watching *storage.CSINode from storage/cacher.go:/csinodes
I0920 04:43:41.425495  108295 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.425980  108295 watch_cache.go:405] Replace watchCache (rev: 59298) 
I0920 04:43:41.426233  108295 client.go:361] parsed scheme: "endpoint"
I0920 04:43:41.426335  108295 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:43:41.427054  108295 store.go:1342] Monitoring csidrivers.storage.k8s.io count at <storage-prefix>//csidrivers
I0920 04:43:41.427141  108295 reflector.go:153] Listing and watching *storage.CSIDriver from storage/cacher.go:/csidrivers
I0920 04:43:41.427379  108295 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.427617  108295 client.go:361] parsed scheme: "endpoint"
I0920 04:43:41.427702  108295 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:43:41.428625  108295 watch_cache.go:405] Replace watchCache (rev: 59298) 
I0920 04:43:41.429156  108295 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0920 04:43:41.429297  108295 reflector.go:153] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0920 04:43:41.429614  108295 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.429719  108295 client.go:361] parsed scheme: "endpoint"
I0920 04:43:41.429743  108295 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:43:41.430607  108295 watch_cache.go:405] Replace watchCache (rev: 59298) 
I0920 04:43:41.430992  108295 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0920 04:43:41.431074  108295 reflector.go:153] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0920 04:43:41.431116  108295 master.go:461] Enabling API group "storage.k8s.io".
I0920 04:43:41.431545  108295 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.431682  108295 client.go:361] parsed scheme: "endpoint"
I0920 04:43:41.431707  108295 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:43:41.432088  108295 watch_cache.go:405] Replace watchCache (rev: 59298) 
I0920 04:43:41.432528  108295 store.go:1342] Monitoring deployments.apps count at <storage-prefix>//deployments
I0920 04:43:41.432699  108295 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.432732  108295 reflector.go:153] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0920 04:43:41.432812  108295 client.go:361] parsed scheme: "endpoint"
I0920 04:43:41.432828  108295 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:43:41.433450  108295 store.go:1342] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0920 04:43:41.433546  108295 reflector.go:153] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0920 04:43:41.433684  108295 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.433842  108295 client.go:361] parsed scheme: "endpoint"
I0920 04:43:41.433848  108295 watch_cache.go:405] Replace watchCache (rev: 59298) 
I0920 04:43:41.433863  108295 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:43:41.435058  108295 store.go:1342] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0920 04:43:41.435196  108295 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.435210  108295 reflector.go:153] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0920 04:43:41.435283  108295 client.go:361] parsed scheme: "endpoint"
I0920 04:43:41.435298  108295 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:43:41.435680  108295 watch_cache.go:405] Replace watchCache (rev: 59298) 
I0920 04:43:41.435968  108295 store.go:1342] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0920 04:43:41.435996  108295 reflector.go:153] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0920 04:43:41.436187  108295 watch_cache.go:405] Replace watchCache (rev: 59298) 
I0920 04:43:41.436097  108295 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.436293  108295 client.go:361] parsed scheme: "endpoint"
I0920 04:43:41.436306  108295 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:43:41.437287  108295 store.go:1342] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0920 04:43:41.437315  108295 master.go:461] Enabling API group "apps".
I0920 04:43:41.437347  108295 reflector.go:153] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0920 04:43:41.437347  108295 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.437415  108295 watch_cache.go:405] Replace watchCache (rev: 59298) 
I0920 04:43:41.437549  108295 client.go:361] parsed scheme: "endpoint"
I0920 04:43:41.437567  108295 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:43:41.438146  108295 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0920 04:43:41.438187  108295 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.438297  108295 client.go:361] parsed scheme: "endpoint"
I0920 04:43:41.438323  108295 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:43:41.438408  108295 reflector.go:153] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0920 04:43:41.439033  108295 watch_cache.go:405] Replace watchCache (rev: 59298) 
I0920 04:43:41.439182  108295 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0920 04:43:41.439216  108295 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.439326  108295 reflector.go:153] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0920 04:43:41.439368  108295 client.go:361] parsed scheme: "endpoint"
I0920 04:43:41.439386  108295 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:43:41.440129  108295 watch_cache.go:405] Replace watchCache (rev: 59298) 
I0920 04:43:41.440203  108295 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0920 04:43:41.440258  108295 reflector.go:153] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0920 04:43:41.440382  108295 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.441050  108295 client.go:361] parsed scheme: "endpoint"
I0920 04:43:41.441162  108295 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:43:41.441318  108295 watch_cache.go:405] Replace watchCache (rev: 59298) 
I0920 04:43:41.442241  108295 watch_cache.go:405] Replace watchCache (rev: 59298) 
I0920 04:43:41.442510  108295 reflector.go:153] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0920 04:43:41.442772  108295 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0920 04:43:41.442838  108295 master.go:461] Enabling API group "admissionregistration.k8s.io".
I0920 04:43:41.442964  108295 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.443353  108295 client.go:361] parsed scheme: "endpoint"
I0920 04:43:41.443530  108295 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:43:41.443789  108295 watch_cache.go:405] Replace watchCache (rev: 59299) 
I0920 04:43:41.444476  108295 store.go:1342] Monitoring events count at <storage-prefix>//events
I0920 04:43:41.444502  108295 master.go:461] Enabling API group "events.k8s.io".
I0920 04:43:41.444649  108295 reflector.go:153] Listing and watching *core.Event from storage/cacher.go:/events
I0920 04:43:41.444971  108295 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.445209  108295 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.445600  108295 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.445721  108295 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.445865  108295 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.445992  108295 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.446491  108295 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.446605  108295 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.446714  108295 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.446800  108295 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.447753  108295 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.448001  108295 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.449642  108295 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.449786  108295 watch_cache.go:405] Replace watchCache (rev: 59300) 
I0920 04:43:41.449962  108295 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.451335  108295 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.451640  108295 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.452548  108295 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.452791  108295 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.453475  108295 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.453739  108295 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0920 04:43:41.453780  108295 genericapiserver.go:404] Skipping API batch/v2alpha1 because it has no resources.
I0920 04:43:41.454482  108295 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.454672  108295 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.454894  108295 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.455813  108295 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.456628  108295 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.457475  108295 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.457705  108295 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.458348  108295 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.459144  108295 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.459417  108295 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.459940  108295 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0920 04:43:41.460000  108295 genericapiserver.go:404] Skipping API node.k8s.io/v1alpha1 because it has no resources.
I0920 04:43:41.460873  108295 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.461100  108295 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.461613  108295 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.462215  108295 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.462737  108295 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.463431  108295 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.464057  108295 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.464748  108295 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.465375  108295 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.466006  108295 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.466512  108295 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0920 04:43:41.466567  108295 genericapiserver.go:404] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
I0920 04:43:41.467057  108295 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.467698  108295 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0920 04:43:41.467751  108295 genericapiserver.go:404] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
I0920 04:43:41.468314  108295 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.468887  108295 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.469142  108295 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.469731  108295 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.470214  108295 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.470771  108295 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.471338  108295 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0920 04:43:41.471389  108295 genericapiserver.go:404] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
I0920 04:43:41.472271  108295 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.472915  108295 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.473180  108295 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.474065  108295 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.474267  108295 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.474559  108295 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.475195  108295 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.475527  108295 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.475732  108295 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.476482  108295 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.476679  108295 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.476902  108295 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0920 04:43:41.476964  108295 genericapiserver.go:404] Skipping API apps/v1beta2 because it has no resources.
W0920 04:43:41.476969  108295 genericapiserver.go:404] Skipping API apps/v1beta1 because it has no resources.
I0920 04:43:41.477684  108295 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.478373  108295 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.478966  108295 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.479472  108295 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.480239  108295 storage_factory.go:285] storing events.events.k8s.io in events.k8s.io/v1beta1, reading as events.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"440ab346-95db-43ac-9500-94e7a7e0cd5f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:43:41.483505  108295 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0920 04:43:41.483526  108295 healthz.go:177] healthz check poststarthook/bootstrap-controller failed: not finished
I0920 04:43:41.483534  108295 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:43:41.483541  108295 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0920 04:43:41.483546  108295 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0920 04:43:41.483552  108295 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[-]poststarthook/bootstrap-controller failed: reason withheld
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0920 04:43:41.483580  108295 httplog.go:90] GET /healthz: (172.733µs) 0 [Go-http-client/1.1 127.0.0.1:38898]
I0920 04:43:41.485014  108295 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.137292ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38898]
I0920 04:43:41.487096  108295 httplog.go:90] GET /api/v1/services: (944.36µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38898]
I0920 04:43:41.490057  108295 httplog.go:90] GET /api/v1/services: (733.016µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38898]
I0920 04:43:41.491838  108295 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0920 04:43:41.491868  108295 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:43:41.491880  108295 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0920 04:43:41.491889  108295 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0920 04:43:41.491896  108295 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0920 04:43:41.491917  108295 httplog.go:90] GET /healthz: (169.718µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38898]
I0920 04:43:41.492750  108295 httplog.go:90] GET /api/v1/namespaces/kube-system: (891.415µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38900]
I0920 04:43:41.493087  108295 httplog.go:90] GET /api/v1/services: (789.777µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38898]
I0920 04:43:41.494007  108295 httplog.go:90] GET /api/v1/services: (1.008198ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38900]
I0920 04:43:41.494592  108295 httplog.go:90] POST /api/v1/namespaces: (1.275062ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38898]
I0920 04:43:41.495826  108295 httplog.go:90] GET /api/v1/namespaces/kube-public: (601.526µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38902]
I0920 04:43:41.497497  108295 httplog.go:90] POST /api/v1/namespaces: (1.373684ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38902]
I0920 04:43:41.498646  108295 httplog.go:90] GET /api/v1/namespaces/kube-node-lease: (856.456µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38902]
I0920 04:43:41.499932  108295 httplog.go:90] POST /api/v1/namespaces: (996.762µs) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38902]
I0920 04:43:41.584512  108295 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0920 04:43:41.584709  108295 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:43:41.584791  108295 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0920 04:43:41.584857  108295 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0920 04:43:41.584918  108295 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0920 04:43:41.585073  108295 httplog.go:90] GET /healthz: (749.976µs) 0 [Go-http-client/1.1 127.0.0.1:38902]
I0920 04:43:41.592727  108295 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0920 04:43:41.592863  108295 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:43:41.592978  108295 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0920 04:43:41.593054  108295 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0920 04:43:41.593103  108295 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0920 04:43:41.593266  108295 httplog.go:90] GET /healthz: (632.517µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38902]
I0920 04:43:41.684282  108295 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0920 04:43:41.684326  108295 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:43:41.684337  108295 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0920 04:43:41.684343  108295 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0920 04:43:41.684349  108295 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0920 04:43:41.684377  108295 httplog.go:90] GET /healthz: (206.977µs) 0 [Go-http-client/1.1 127.0.0.1:38902]
I0920 04:43:41.692598  108295 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0920 04:43:41.692627  108295 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:43:41.692635  108295 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0920 04:43:41.692641  108295 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0920 04:43:41.692647  108295 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0920 04:43:41.692685  108295 httplog.go:90] GET /healthz: (195.285µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38902]
I0920 04:43:41.750625  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:41.751434  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:41.751612  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:41.752940  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:41.754083  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:41.755588  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:41.755974  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:41.784513  108295 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0920 04:43:41.784678  108295 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:43:41.784727  108295 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0920 04:43:41.784754  108295 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0920 04:43:41.784782  108295 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0920 04:43:41.784890  108295 httplog.go:90] GET /healthz: (545.655µs) 0 [Go-http-client/1.1 127.0.0.1:38902]
I0920 04:43:41.792758  108295 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0920 04:43:41.792899  108295 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:43:41.792944  108295 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0920 04:43:41.792990  108295 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0920 04:43:41.793022  108295 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0920 04:43:41.793188  108295 httplog.go:90] GET /healthz: (555.886µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38902]
I0920 04:43:41.884335  108295 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0920 04:43:41.884598  108295 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:43:41.884695  108295 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0920 04:43:41.884768  108295 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0920 04:43:41.884838  108295 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0920 04:43:41.885000  108295 httplog.go:90] GET /healthz: (808.688µs) 0 [Go-http-client/1.1 127.0.0.1:38902]
I0920 04:43:41.892544  108295 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0920 04:43:41.892568  108295 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:43:41.892579  108295 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0920 04:43:41.892589  108295 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0920 04:43:41.892597  108295 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0920 04:43:41.892624  108295 httplog.go:90] GET /healthz: (190.615µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38902]
I0920 04:43:41.984289  108295 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0920 04:43:41.984322  108295 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:43:41.984332  108295 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0920 04:43:41.984338  108295 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0920 04:43:41.984344  108295 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0920 04:43:41.984392  108295 httplog.go:90] GET /healthz: (233.624µs) 0 [Go-http-client/1.1 127.0.0.1:38902]
I0920 04:43:41.992666  108295 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0920 04:43:41.992788  108295 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:43:41.992832  108295 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0920 04:43:41.992868  108295 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0920 04:43:41.992898  108295 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0920 04:43:41.993100  108295 httplog.go:90] GET /healthz: (549.061µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38902]
I0920 04:43:42.084302  108295 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0920 04:43:42.084335  108295 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:43:42.084361  108295 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0920 04:43:42.084373  108295 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0920 04:43:42.084384  108295 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0920 04:43:42.084429  108295 httplog.go:90] GET /healthz: (281.331µs) 0 [Go-http-client/1.1 127.0.0.1:38902]
I0920 04:43:42.092584  108295 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0920 04:43:42.092615  108295 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:43:42.092624  108295 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0920 04:43:42.092630  108295 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0920 04:43:42.092635  108295 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0920 04:43:42.092673  108295 httplog.go:90] GET /healthz: (193.618µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38902]
I0920 04:43:42.184221  108295 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0920 04:43:42.184255  108295 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:43:42.184266  108295 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0920 04:43:42.184275  108295 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0920 04:43:42.184283  108295 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0920 04:43:42.184310  108295 httplog.go:90] GET /healthz: (216.128µs) 0 [Go-http-client/1.1 127.0.0.1:38902]
I0920 04:43:42.192900  108295 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0920 04:43:42.192980  108295 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:43:42.192993  108295 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0920 04:43:42.193044  108295 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0920 04:43:42.193058  108295 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0920 04:43:42.193085  108295 httplog.go:90] GET /healthz: (438.73µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38902]
I0920 04:43:42.284304  108295 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0920 04:43:42.284338  108295 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:43:42.284349  108295 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0920 04:43:42.284357  108295 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0920 04:43:42.284364  108295 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0920 04:43:42.284415  108295 httplog.go:90] GET /healthz: (232.677µs) 0 [Go-http-client/1.1 127.0.0.1:38902]
I0920 04:43:42.292709  108295 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0920 04:43:42.292737  108295 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:43:42.292749  108295 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0920 04:43:42.292758  108295 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0920 04:43:42.292778  108295 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0920 04:43:42.292817  108295 httplog.go:90] GET /healthz: (239.139µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38902]
I0920 04:43:42.331760  108295 client.go:361] parsed scheme: "endpoint"
I0920 04:43:42.331861  108295 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:43:42.385392  108295 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:43:42.385463  108295 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0920 04:43:42.385472  108295 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0920 04:43:42.385478  108295 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0920 04:43:42.385524  108295 httplog.go:90] GET /healthz: (1.288628ms) 0 [Go-http-client/1.1 127.0.0.1:38902]
I0920 04:43:42.393490  108295 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:43:42.393530  108295 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0920 04:43:42.393540  108295 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0920 04:43:42.393548  108295 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0920 04:43:42.393587  108295 httplog.go:90] GET /healthz: (1.080155ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38902]
I0920 04:43:42.484443  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles: (907.408µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38902]
I0920 04:43:42.484712  108295 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.152424ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38904]
I0920 04:43:42.486126  108295 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:43:42.486151  108295 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0920 04:43:42.486159  108295 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0920 04:43:42.486165  108295 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0920 04:43:42.486178  108295 httplog.go:90] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-node-critical: (1.661412ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38920]
I0920 04:43:42.486189  108295 httplog.go:90] GET /healthz: (1.312379ms) 0 [Go-http-client/1.1 127.0.0.1:38902]
I0920 04:43:42.486424  108295 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: (1.191353ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38924]
I0920 04:43:42.486523  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.504828ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38904]
I0920 04:43:42.487744  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (812.708µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38904]
I0920 04:43:42.488306  108295 httplog.go:90] POST /api/v1/namespaces/kube-system/configmaps: (1.484187ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38924]
I0920 04:43:42.488616  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (572.061µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38904]
I0920 04:43:42.488679  108295 httplog.go:90] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (2.104744ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38918]
I0920 04:43:42.488896  108295 storage_scheduling.go:139] created PriorityClass system-node-critical with value 2000001000
I0920 04:43:42.490370  108295 httplog.go:90] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-cluster-critical: (917.041µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:42.490644  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (1.734574ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38924]
I0920 04:43:42.492214  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (1.277288ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38924]
I0920 04:43:42.492277  108295 httplog.go:90] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (1.517765ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:42.492441  108295 storage_scheduling.go:139] created PriorityClass system-cluster-critical with value 2000000000
I0920 04:43:42.492480  108295 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
I0920 04:43:42.493211  108295 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:43:42.493242  108295 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:43:42.493281  108295 httplog.go:90] GET /healthz: (840.994µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38924]
I0920 04:43:42.493616  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (1.097491ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:42.495163  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (1.145665ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:42.496613  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (761.339µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:42.498056  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-admin: (1.063304ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:42.500123  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.572406ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:42.500348  108295 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/cluster-admin
I0920 04:43:42.501205  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:discovery: (569.884µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:42.503080  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.346064ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:42.503292  108295 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:discovery
I0920 04:43:42.504219  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:basic-user: (764.436µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:42.505987  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.302043ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:42.506328  108295 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:basic-user
I0920 04:43:42.507320  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:public-info-viewer: (738.701µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:42.509361  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.376062ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:42.509607  108295 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:public-info-viewer
I0920 04:43:42.510636  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (778.957µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:42.512541  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.533665ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:42.513200  108295 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/admin
I0920 04:43:42.515302  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (1.379497ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:42.517941  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.009882ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:42.518267  108295 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/edit
I0920 04:43:42.519540  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (951.673µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:42.521754  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.460156ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:42.522087  108295 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/view
I0920 04:43:42.523350  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (954.516µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:42.525658  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.802434ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:42.525806  108295 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin
I0920 04:43:42.527539  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (1.484916ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:42.530123  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.875773ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:42.530361  108295 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit
I0920 04:43:42.532409  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (1.825405ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:42.534308  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.455449ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:42.534606  108295 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view
I0920 04:43:42.535521  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:heapster: (753.656µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:42.537392  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.375624ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:42.537562  108295 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:heapster
I0920 04:43:42.538588  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node: (858.498µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:42.541106  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.152536ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:42.541672  108295 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node
I0920 04:43:42.543082  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-problem-detector: (917.596µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:42.545061  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.523239ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:42.545301  108295 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector
I0920 04:43:42.546421  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kubelet-api-admin: (888.007µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:42.549143  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.257224ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:42.549378  108295 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kubelet-api-admin
I0920 04:43:42.550464  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-bootstrapper: (871.539µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:42.552549  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.652079ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:42.552778  108295 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-bootstrapper
I0920 04:43:42.553758  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:auth-delegator: (788.571µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:42.555220  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.172141ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:42.555511  108295 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:auth-delegator
I0920 04:43:42.557015  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-aggregator: (914.534µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:42.559373  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.002667ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:42.559597  108295 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator
I0920 04:43:42.560524  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-controller-manager: (777.768µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:42.563724  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.972176ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:42.564363  108295 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager
I0920 04:43:42.565720  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-dns: (1.010553ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:42.567211  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.170775ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:42.567448  108295 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-dns
I0920 04:43:42.568639  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:persistent-volume-provisioner: (988.549µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:42.570818  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.601442ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:42.571030  108295 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner
I0920 04:43:42.572050  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-attacher: (850.953µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:42.574998  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.00125ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:42.575166  108295 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:csi-external-attacher
I0920 04:43:42.576844  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:nodeclient: (1.458221ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:42.578866  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.538646ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:42.579058  108295 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:nodeclient
I0920 04:43:42.580178  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient: (743.465µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:42.582614  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.006181ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:42.582912  108295 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
I0920 04:43:42.583900  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:volume-scheduler: (800.01µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:42.584712  108295 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:43:42.584831  108295 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:43:42.585125  108295 httplog.go:90] GET /healthz: (1.163512ms) 0 [Go-http-client/1.1 127.0.0.1:38924]
I0920 04:43:42.585776  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.54364ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:42.586048  108295 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:volume-scheduler
I0920 04:43:42.587002  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-proxier: (749.393µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:42.589173  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.816096ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:42.589403  108295 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-proxier
I0920 04:43:42.591652  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-scheduler: (1.976499ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:42.593507  108295 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:43:42.593523  108295 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:43:42.593544  108295 httplog.go:90] GET /healthz: (1.038655ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38924]
I0920 04:43:42.595924  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.829534ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:42.596396  108295 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler
I0920 04:43:42.597237  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-provisioner: (647.807µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:42.598867  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.311846ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:42.599724  108295 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:csi-external-provisioner
I0920 04:43:42.600990  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:attachdetach-controller: (738.734µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:42.603578  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.031928ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:42.604043  108295 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0920 04:43:42.605178  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:clusterrole-aggregation-controller: (952.029µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:42.606705  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.062933ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:42.607098  108295 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0920 04:43:42.608416  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:cronjob-controller: (989.089µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:42.611736  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.617471ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:42.612132  108295 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0920 04:43:42.613432  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:daemon-set-controller: (1.002049ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:42.614950  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.118672ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:42.615280  108295 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0920 04:43:42.616625  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:deployment-controller: (1.204752ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:42.619234  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.507461ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:42.619567  108295 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller
I0920 04:43:42.620995  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:disruption-controller: (830.45µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:42.623192  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.754877ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:42.623478  108295 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller
I0920 04:43:42.624515  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:endpoint-controller: (784.963µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:42.626777  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.393556ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:42.627013  108295 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0920 04:43:42.628408  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:expand-controller: (800.619µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:42.631219  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.432649ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:42.632962  108295 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:expand-controller
I0920 04:43:42.634200  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:generic-garbage-collector: (1.011968ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:42.636696  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.961549ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:42.637154  108295 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0920 04:43:42.638644  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:horizontal-pod-autoscaler: (814.078µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:42.640683  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.659012ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:42.640997  108295 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0920 04:43:42.641894  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:job-controller: (660.333µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:42.643843  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.383751ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:42.644601  108295 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller
I0920 04:43:42.646317  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:namespace-controller: (823.428µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:42.648986  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.091101ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:42.649175  108295 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:namespace-controller
I0920 04:43:42.650102  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:node-controller: (758.519µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:42.651926  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.358328ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:42.652116  108295 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:node-controller
I0920 04:43:42.653002  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:persistent-volume-binder: (697.927µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:42.654967  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.605455ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:42.655421  108295 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0920 04:43:42.657302  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pod-garbage-collector: (725.587µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:42.658940  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.1516ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:42.659094  108295 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0920 04:43:42.659982  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replicaset-controller: (712.247µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:42.662049  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.692658ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:42.662280  108295 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0920 04:43:42.663210  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replication-controller: (627.496µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:42.664842  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.226932ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:42.665177  108295 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller
I0920 04:43:42.666049  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:resourcequota-controller: (669.192µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:42.667779  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.23808ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:42.668188  108295 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0920 04:43:42.669076  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:route-controller: (710.506µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:42.670813  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.298848ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:42.671027  108295 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:route-controller
I0920 04:43:42.671912  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-account-controller: (686.697µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:42.673592  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.21961ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:42.673820  108295 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-account-controller
I0920 04:43:42.674767  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-controller: (753.217µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:42.677546  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.662531ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:42.677798  108295 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-controller
I0920 04:43:42.678683  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:statefulset-controller: (704.948µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:42.680237  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.238823ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:42.680470  108295 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0920 04:43:42.681300  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:ttl-controller: (690.568µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:42.682747  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.074576ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:42.682939  108295 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-controller
I0920 04:43:42.684186  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:certificate-controller: (645.355µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:42.684618  108295 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:43:42.684649  108295 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:43:42.684670  108295 httplog.go:90] GET /healthz: (660.568µs) 0 [Go-http-client/1.1 127.0.0.1:38924]
I0920 04:43:42.693078  108295 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:43:42.693120  108295 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:43:42.693171  108295 httplog.go:90] GET /healthz: (719.459µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38924]
I0920 04:43:42.705109  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.486278ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38924]
I0920 04:43:42.705311  108295 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller
I0920 04:43:42.724957  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pvc-protection-controller: (1.260245ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38924]
I0920 04:43:42.745694  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.942123ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38924]
I0920 04:43:42.745944  108295 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0920 04:43:42.750872  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:42.751607  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:42.751792  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:42.753141  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:42.754248  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:42.755698  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:42.756172  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:42.764729  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pv-protection-controller: (1.046245ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38924]
I0920 04:43:42.784836  108295 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:43:42.784863  108295 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:43:42.784895  108295 httplog.go:90] GET /healthz: (865.261µs) 0 [Go-http-client/1.1 127.0.0.1:38922]
I0920 04:43:42.785858  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.254163ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38924]
I0920 04:43:42.786107  108295 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0920 04:43:42.793446  108295 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:43:42.793503  108295 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:43:42.793534  108295 httplog.go:90] GET /healthz: (1.033952ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38924]
I0920 04:43:42.804748  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin: (1.172885ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38924]
I0920 04:43:42.826013  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.324046ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38924]
I0920 04:43:42.826229  108295 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin
I0920 04:43:42.846925  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (2.149175ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38924]
I0920 04:43:42.865940  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.323473ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38924]
I0920 04:43:42.866206  108295 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery
I0920 04:43:42.884813  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:basic-user: (1.203397ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38924]
I0920 04:43:42.885008  108295 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:43:42.885046  108295 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:43:42.885075  108295 httplog.go:90] GET /healthz: (1.033149ms) 0 [Go-http-client/1.1 127.0.0.1:38922]
I0920 04:43:42.893471  108295 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:43:42.893509  108295 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:43:42.893549  108295 httplog.go:90] GET /healthz: (1.03364ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:42.905683  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.987498ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:42.905904  108295 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user
I0920 04:43:42.925186  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:public-info-viewer: (1.423469ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:42.945602  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.85718ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:42.946010  108295 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:public-info-viewer
I0920 04:43:42.964986  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node-proxier: (1.207382ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:42.984862  108295 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:43:42.984904  108295 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:43:42.984958  108295 httplog.go:90] GET /healthz: (890.229µs) 0 [Go-http-client/1.1 127.0.0.1:38924]
I0920 04:43:42.985667  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.831597ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:42.985988  108295 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier
I0920 04:43:42.994548  108295 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:43:42.994587  108295 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:43:42.994626  108295 httplog.go:90] GET /healthz: (1.704927ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:43.005025  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-controller-manager: (1.355562ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:43.025545  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.878148ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:43.025840  108295 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager
I0920 04:43:43.044826  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-dns: (1.194099ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:43.065331  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.669399ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:43.065759  108295 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns
I0920 04:43:43.084899  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-scheduler: (1.244133ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:43.085050  108295 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:43:43.085218  108295 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:43:43.085371  108295 httplog.go:90] GET /healthz: (1.320074ms) 0 [Go-http-client/1.1 127.0.0.1:38924]
I0920 04:43:43.093347  108295 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:43:43.093487  108295 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:43:43.093696  108295 httplog.go:90] GET /healthz: (1.147646ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38924]
I0920 04:43:43.107050  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.271723ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38924]
I0920 04:43:43.107309  108295 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler
I0920 04:43:43.125339  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:volume-scheduler: (1.44067ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38924]
I0920 04:43:43.146361  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.451928ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38924]
I0920 04:43:43.146734  108295 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:volume-scheduler
I0920 04:43:43.165353  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node: (1.54195ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38924]
I0920 04:43:43.184955  108295 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:43:43.185021  108295 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:43:43.185066  108295 httplog.go:90] GET /healthz: (966.654µs) 0 [Go-http-client/1.1 127.0.0.1:38922]
I0920 04:43:43.185490  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.862169ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38924]
I0920 04:43:43.185741  108295 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node
I0920 04:43:43.193129  108295 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:43:43.193155  108295 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:43:43.193194  108295 httplog.go:90] GET /healthz: (667.568µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38924]
I0920 04:43:43.204509  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:attachdetach-controller: (906.882µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38924]
I0920 04:43:43.225207  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.577613ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38924]
I0920 04:43:43.225490  108295 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0920 04:43:43.245589  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:clusterrole-aggregation-controller: (1.97016ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38924]
I0920 04:43:43.265108  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.511911ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38924]
I0920 04:43:43.265369  108295 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0920 04:43:43.284610  108295 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:43:43.284699  108295 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:43:43.284727  108295 httplog.go:90] GET /healthz: (696.051µs) 0 [Go-http-client/1.1 127.0.0.1:38922]
I0920 04:43:43.284906  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:cronjob-controller: (1.322528ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38924]
I0920 04:43:43.293224  108295 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:43:43.293357  108295 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:43:43.293537  108295 httplog.go:90] GET /healthz: (1.009857ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38924]
I0920 04:43:43.305007  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.418061ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38924]
I0920 04:43:43.305206  108295 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0920 04:43:43.324375  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:daemon-set-controller: (733.853µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38924]
I0920 04:43:43.345588  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.988154ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38924]
I0920 04:43:43.345912  108295 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0920 04:43:43.365211  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:deployment-controller: (1.431867ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38924]
I0920 04:43:43.385172  108295 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:43:43.385313  108295 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:43:43.385554  108295 httplog.go:90] GET /healthz: (1.417372ms) 0 [Go-http-client/1.1 127.0.0.1:38922]
I0920 04:43:43.386006  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.220668ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38924]
I0920 04:43:43.386252  108295 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I0920 04:43:43.393901  108295 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:43:43.393956  108295 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:43:43.393996  108295 httplog.go:90] GET /healthz: (1.211859ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38924]
I0920 04:43:43.405507  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:disruption-controller: (1.710005ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38924]
I0920 04:43:43.426972  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.202417ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38924]
I0920 04:43:43.427385  108295 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I0920 04:43:43.445540  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:endpoint-controller: (1.774712ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38924]
I0920 04:43:43.466244  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.463102ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38924]
I0920 04:43:43.466583  108295 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0920 04:43:43.485077  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:expand-controller: (1.304498ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38924]
I0920 04:43:43.485527  108295 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:43:43.485609  108295 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:43:43.485705  108295 httplog.go:90] GET /healthz: (1.629335ms) 0 [Go-http-client/1.1 127.0.0.1:38922]
I0920 04:43:43.493580  108295 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:43:43.493729  108295 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:43:43.493870  108295 httplog.go:90] GET /healthz: (1.310894ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:43.505746  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.09505ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:43.506048  108295 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
I0920 04:43:43.524662  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:generic-garbage-collector: (1.0796ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:43.545498  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.790599ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:43.545989  108295 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0920 04:43:43.565105  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:horizontal-pod-autoscaler: (1.265039ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:43.585240  108295 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:43:43.585274  108295 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:43:43.585318  108295 httplog.go:90] GET /healthz: (1.251781ms) 0 [Go-http-client/1.1 127.0.0.1:38924]
I0920 04:43:43.585808  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.987565ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:43.586075  108295 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0920 04:43:43.593295  108295 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:43:43.593322  108295 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:43:43.593354  108295 httplog.go:90] GET /healthz: (898.537µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:43.605136  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:job-controller: (1.389766ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:43.626017  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.294913ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:43.626310  108295 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I0920 04:43:43.644982  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:namespace-controller: (1.255423ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:43.666012  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.323286ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:43.666262  108295 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I0920 04:43:43.685879  108295 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:43:43.686051  108295 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:43:43.685884  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:node-controller: (2.131782ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:43.686374  108295 httplog.go:90] GET /healthz: (2.287508ms) 0 [Go-http-client/1.1 127.0.0.1:38924]
I0920 04:43:43.693942  108295 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:43:43.693975  108295 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:43:43.694013  108295 httplog.go:90] GET /healthz: (1.402424ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38924]
I0920 04:43:43.705886  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.083019ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38924]
I0920 04:43:43.706114  108295 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I0920 04:43:43.724674  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:persistent-volume-binder: (984.084µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38924]
I0920 04:43:43.745728  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.115325ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38924]
I0920 04:43:43.745978  108295 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0920 04:43:43.751071  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:43.751736  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:43.751964  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:43.753302  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:43.754379  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:43.755874  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:43.756328  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:43.765402  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pod-garbage-collector: (1.632539ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38924]
I0920 04:43:43.785767  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.965302ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38924]
I0920 04:43:43.786071  108295 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:43:43.786170  108295 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:43:43.786298  108295 httplog.go:90] GET /healthz: (2.240383ms) 0 [Go-http-client/1.1 127.0.0.1:38922]
I0920 04:43:43.786489  108295 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0920 04:43:43.794063  108295 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:43:43.794110  108295 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:43:43.794155  108295 httplog.go:90] GET /healthz: (1.48255ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:43.805348  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replicaset-controller: (1.526629ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:43.825866  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.142039ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:43.826097  108295 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0920 04:43:43.845189  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replication-controller: (1.479675ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:43.865768  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.050036ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:43.865968  108295 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I0920 04:43:43.885281  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:resourcequota-controller: (1.285066ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:43.885627  108295 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:43:43.885649  108295 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:43:43.885682  108295 httplog.go:90] GET /healthz: (1.288817ms) 0 [Go-http-client/1.1 127.0.0.1:38924]
I0920 04:43:43.893426  108295 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:43:43.893500  108295 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:43:43.893552  108295 httplog.go:90] GET /healthz: (1.128264ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38924]
I0920 04:43:43.905495  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.839222ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38924]
I0920 04:43:43.905713  108295 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0920 04:43:43.924849  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:route-controller: (1.169941ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38924]
I0920 04:43:43.945416  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.736941ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38924]
I0920 04:43:43.945617  108295 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I0920 04:43:43.964900  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-account-controller: (1.262121ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38924]
I0920 04:43:43.985141  108295 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:43:43.985179  108295 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:43:43.985212  108295 httplog.go:90] GET /healthz: (1.089534ms) 0 [Go-http-client/1.1 127.0.0.1:38922]
I0920 04:43:43.985357  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.756645ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38924]
I0920 04:43:43.985727  108295 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I0920 04:43:43.993258  108295 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:43:43.993286  108295 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:43:43.993314  108295 httplog.go:90] GET /healthz: (808.282µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:44.004616  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-controller: (1.018796ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:44.025409  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.756954ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:44.025680  108295 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I0920 04:43:44.044862  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:statefulset-controller: (1.223214ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:44.056834  108295 node_lifecycle_controller.go:718] Controller observed a Node deletion: node-1
I0920 04:43:44.056865  108295 controller_utils.go:168] Recording Removing Node node-1 from Controller event message for node node-1
I0920 04:43:44.056946  108295 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-1", UID:"7e9a2d10-1d03-40fa-86fe-3a32fa34eb7c", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RemovingNode' Node node-1 event: Removing Node node-1 from Controller
I0920 04:43:44.058974  108295 httplog.go:90] POST /api/v1/namespaces/default/events: (1.617608ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43256]
I0920 04:43:44.064923  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.344934ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:44.065137  108295 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0920 04:43:44.084679  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:ttl-controller: (1.093636ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:44.085210  108295 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:43:44.085235  108295 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:43:44.085282  108295 httplog.go:90] GET /healthz: (855.823µs) 0 [Go-http-client/1.1 127.0.0.1:38924]
I0920 04:43:44.093441  108295 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:43:44.093488  108295 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:43:44.093530  108295 httplog.go:90] GET /healthz: (1.106908ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38924]
I0920 04:43:44.107261  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.300994ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38924]
I0920 04:43:44.107824  108295 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I0920 04:43:44.124948  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:certificate-controller: (1.239713ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38924]
I0920 04:43:44.145445  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.797521ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38924]
I0920 04:43:44.145650  108295 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I0920 04:43:44.164835  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pvc-protection-controller: (1.153505ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38924]
I0920 04:43:44.184877  108295 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:43:44.184906  108295 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:43:44.184959  108295 httplog.go:90] GET /healthz: (959.991µs) 0 [Go-http-client/1.1 127.0.0.1:38922]
I0920 04:43:44.185658  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.905546ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38924]
I0920 04:43:44.185870  108295 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0920 04:43:44.193818  108295 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:43:44.193959  108295 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:43:44.194105  108295 httplog.go:90] GET /healthz: (1.500475ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38924]
I0920 04:43:44.205283  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pv-protection-controller: (1.400136ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38924]
I0920 04:43:44.227075  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.989494ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38924]
I0920 04:43:44.227496  108295 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0920 04:43:44.245312  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/extension-apiserver-authentication-reader: (1.526139ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38924]
I0920 04:43:44.247438  108295 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.519991ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38924]
I0920 04:43:44.265830  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.10331ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38924]
I0920 04:43:44.266070  108295 storage_rbac.go:278] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
I0920 04:43:44.284808  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:bootstrap-signer: (1.157165ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38924]
I0920 04:43:44.285618  108295 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:43:44.285655  108295 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:43:44.285687  108295 httplog.go:90] GET /healthz: (1.395585ms) 0 [Go-http-client/1.1 127.0.0.1:38922]
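The repeated `/healthz` blocks above use the apiserver's verbose format: `[+]name ok` for a passing check and `[-]name failed: ...` for a failing one (here `poststarthook/rbac/bootstrap-roles` until RBAC bootstrapping finishes). As a minimal local sketch — not a client-go API — that format can be parsed like this:

```go
package main

import (
	"fmt"
	"strings"
)

// parseHealthz extracts per-check status from a verbose /healthz body
// like the ones in the log above: "[+]name ok" is passing,
// "[-]name failed: ..." is failing. Illustrative parsing only.
func parseHealthz(body string) map[string]bool {
	checks := map[string]bool{}
	for _, line := range strings.Split(body, "\n") {
		line = strings.TrimSpace(line)
		switch {
		case strings.HasPrefix(line, "[+]"):
			name := strings.TrimSuffix(strings.TrimPrefix(line, "[+]"), " ok")
			checks[name] = true
		case strings.HasPrefix(line, "[-]"):
			rest := strings.TrimPrefix(line, "[-]")
			name := strings.SplitN(rest, " ", 2)[0]
			checks[name] = false
		}
	}
	return checks
}

func main() {
	body := `[+]ping ok
[+]etcd ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
healthz check failed`
	for name, ok := range parseHealthz(body) {
		fmt.Println(name, ok)
	}
}
```

Once every poststarthook reports `[+]`, the endpoint flips to HTTP 200, which is visible later in the log.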
I0920 04:43:44.286586  108295 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.290907ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38924]
I0920 04:43:44.293319  108295 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:43:44.293544  108295 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:43:44.293680  108295 httplog.go:90] GET /healthz: (977.65µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38924]
I0920 04:43:44.305346  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.755219ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38924]
I0920 04:43:44.305647  108295 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0920 04:43:44.324715  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:cloud-provider: (1.077512ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38924]
I0920 04:43:44.326227  108295 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.093178ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38924]
I0920 04:43:44.345571  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.827292ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38924]
I0920 04:43:44.345888  108295 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0920 04:43:44.366001  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:token-cleaner: (1.261766ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38924]
I0920 04:43:44.368129  108295 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.48772ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38924]
I0920 04:43:44.385188  108295 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:43:44.385229  108295 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:43:44.385263  108295 httplog.go:90] GET /healthz: (1.152835ms) 0 [Go-http-client/1.1 127.0.0.1:38922]
I0920 04:43:44.386258  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.521295ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38924]
I0920 04:43:44.386671  108295 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0920 04:43:44.394170  108295 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:43:44.394410  108295 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:43:44.394740  108295 httplog.go:90] GET /healthz: (1.963151ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38924]
I0920 04:43:44.405283  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-controller-manager: (1.540518ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38924]
I0920 04:43:44.407717  108295 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.768488ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38924]
I0920 04:43:44.426147  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.396762ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38924]
I0920 04:43:44.426631  108295 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0920 04:43:44.445383  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-scheduler: (1.634649ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38924]
I0920 04:43:44.448625  108295 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.421354ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38924]
I0920 04:43:44.468675  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (3.257504ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38924]
I0920 04:43:44.468961  108295 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0920 04:43:44.485136  108295 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:43:44.485196  108295 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:43:44.485237  108295 httplog.go:90] GET /healthz: (1.137619ms) 0 [Go-http-client/1.1 127.0.0.1:38922]
I0920 04:43:44.486071  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles/system:controller:bootstrap-signer: (2.030144ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38924]
I0920 04:43:44.488229  108295 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.47037ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38924]
I0920 04:43:44.493260  108295 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:43:44.493414  108295 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:43:44.493574  108295 httplog.go:90] GET /healthz: (1.070311ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38924]
I0920 04:43:44.505257  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles: (1.620568ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38924]
I0920 04:43:44.505586  108295 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0920 04:43:44.525317  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings/system:controller:bootstrap-signer: (1.366763ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38924]
I0920 04:43:44.527838  108295 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.803757ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38924]
I0920 04:43:44.547009  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings: (3.049327ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38924]
I0920 04:43:44.547666  108295 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0920 04:43:44.564906  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::extension-apiserver-authentication-reader: (1.152467ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38924]
I0920 04:43:44.566584  108295 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.131325ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38924]
I0920 04:43:44.585144  108295 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:43:44.585186  108295 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:43:44.585277  108295 httplog.go:90] GET /healthz: (1.223889ms) 0 [Go-http-client/1.1 127.0.0.1:38922]
I0920 04:43:44.586393  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.56783ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38924]
I0920 04:43:44.586668  108295 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::extension-apiserver-authentication-reader in kube-system
I0920 04:43:44.594346  108295 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:43:44.594396  108295 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:43:44.594441  108295 httplog.go:90] GET /healthz: (1.726878ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38924]
I0920 04:43:44.605010  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-controller-manager: (1.206129ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38924]
I0920 04:43:44.606996  108295 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.308952ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38924]
I0920 04:43:44.626168  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.42098ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38924]
I0920 04:43:44.626480  108295 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0920 04:43:44.645008  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-scheduler: (1.247353ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38924]
I0920 04:43:44.646903  108295 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.174989ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38924]
I0920 04:43:44.666479  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.638192ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38924]
I0920 04:43:44.666724  108295 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0920 04:43:44.684926  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:bootstrap-signer: (1.205751ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38924]
I0920 04:43:44.685055  108295 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:43:44.685424  108295 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:43:44.685663  108295 httplog.go:90] GET /healthz: (1.516972ms) 0 [Go-http-client/1.1 127.0.0.1:38922]
I0920 04:43:44.687240  108295 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.297461ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38924]
I0920 04:43:44.693549  108295 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:43:44.693575  108295 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:43:44.693620  108295 httplog.go:90] GET /healthz: (1.074834ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38924]
I0920 04:43:44.706346  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.450934ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38924]
I0920 04:43:44.706650  108295 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0920 04:43:44.725134  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:cloud-provider: (1.326771ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38924]
I0920 04:43:44.727545  108295 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.764389ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38924]
I0920 04:43:44.745887  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.171307ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38924]
I0920 04:43:44.746150  108295 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0920 04:43:44.751303  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:44.751898  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:44.752119  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:44.753442  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:44.754488  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:44.756237  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:44.756440  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:44.764719  108295 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:token-cleaner: (1.044044ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38924]
I0920 04:43:44.766426  108295 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.229316ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38924]
I0920 04:43:44.785209  108295 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0920 04:43:44.785237  108295 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0920 04:43:44.785242  108295 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.635058ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38924]
I0920 04:43:44.785276  108295 httplog.go:90] GET /healthz: (906.113µs) 0 [Go-http-client/1.1 127.0.0.1:38922]
I0920 04:43:44.785431  108295 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0920 04:43:44.793068  108295 httplog.go:90] GET /healthz: (626.834µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:44.794546  108295 httplog.go:90] GET /api/v1/namespaces/default: (1.054711ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:44.796373  108295 httplog.go:90] POST /api/v1/namespaces: (1.374322ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:44.797588  108295 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (806.653µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:44.801196  108295 httplog.go:90] POST /api/v1/namespaces/default/services: (3.109664ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:44.802324  108295 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (735.556µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:44.804202  108295 httplog.go:90] POST /api/v1/namespaces/default/endpoints: (1.482599ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:44.884971  108295 httplog.go:90] GET /healthz: (825.673µs) 200 [Go-http-client/1.1 127.0.0.1:38922]
W0920 04:43:44.886583  108295 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0920 04:43:44.886621  108295 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0920 04:43:44.886654  108295 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0920 04:43:44.886661  108295 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0920 04:43:44.886707  108295 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0920 04:43:44.886733  108295 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0920 04:43:44.886743  108295 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0920 04:43:44.886770  108295 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0920 04:43:44.886785  108295 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0920 04:43:44.886805  108295 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0920 04:43:44.886819  108295 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0920 04:43:44.886871  108295 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0920 04:43:44.886909  108295 factory.go:294] Creating scheduler from algorithm provider 'DefaultProvider'
I0920 04:43:44.886920  108295 factory.go:382] Creating scheduler with fit predicates 'map[CheckNodeUnschedulable:{} CheckVolumeBinding:{} GeneralPredicates:{} MatchInterPodAffinity:{} MaxAzureDiskVolumeCount:{} MaxCSIVolumeCountPred:{} MaxEBSVolumeCount:{} MaxGCEPDVolumeCount:{} NoDiskConflict:{} NoVolumeZoneConflict:{} PodToleratesNodeTaints:{}]' and priority functions 'map[BalancedResourceAllocation:{} ImageLocalityPriority:{} InterPodAffinityPriority:{} LeastRequestedPriority:{} NodeAffinityPriority:{} NodePreferAvoidPodsPriority:{} SelectorSpreadPriority:{} TaintTolerationPriority:{}]'
I0920 04:43:44.887083  108295 shared_informer.go:197] Waiting for caches to sync for scheduler
I0920 04:43:44.887439  108295 reflector.go:118] Starting reflector *v1.Pod (12h0m0s) from k8s.io/kubernetes/test/integration/scheduler/util.go:232
I0920 04:43:44.887476  108295 reflector.go:153] Listing and watching *v1.Pod from k8s.io/kubernetes/test/integration/scheduler/util.go:232
I0920 04:43:44.888390  108295 httplog.go:90] GET /api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0: (517.257µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38922]
I0920 04:43:44.889188  108295 get.go:251] Starting watch for /api/v1/pods, rv=59298 labels= fields=status.phase!=Failed,status.phase!=Succeeded timeout=6m4s
I0920 04:43:44.987300  108295 shared_informer.go:227] caches populated
I0920 04:43:44.987333  108295 shared_informer.go:204] Caches are synced for scheduler 
I0920 04:43:44.987729  108295 reflector.go:118] Starting reflector *v1.StorageClass (1s) from k8s.io/client-go/informers/factory.go:134
I0920 04:43:44.987871  108295 reflector.go:153] Listing and watching *v1.StorageClass from k8s.io/client-go/informers/factory.go:134
I0920 04:43:44.987751  108295 reflector.go:118] Starting reflector *v1.Node (1s) from k8s.io/client-go/informers/factory.go:134
I0920 04:43:44.988029  108295 reflector.go:153] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:134
I0920 04:43:44.988061  108295 reflector.go:118] Starting reflector *v1.PersistentVolumeClaim (1s) from k8s.io/client-go/informers/factory.go:134
I0920 04:43:44.988076  108295 reflector.go:153] Listing and watching *v1.PersistentVolumeClaim from k8s.io/client-go/informers/factory.go:134
I0920 04:43:44.987808  108295 reflector.go:118] Starting reflector *v1.StatefulSet (1s) from k8s.io/client-go/informers/factory.go:134
I0920 04:43:44.988192  108295 reflector.go:153] Listing and watching *v1.StatefulSet from k8s.io/client-go/informers/factory.go:134
I0920 04:43:44.987939  108295 reflector.go:118] Starting reflector *v1.PersistentVolume (1s) from k8s.io/client-go/informers/factory.go:134
I0920 04:43:44.988436  108295 reflector.go:153] Listing and watching *v1.PersistentVolume from k8s.io/client-go/informers/factory.go:134
I0920 04:43:44.987938  108295 reflector.go:118] Starting reflector *v1beta1.CSINode (1s) from k8s.io/client-go/informers/factory.go:134
I0920 04:43:44.988744  108295 reflector.go:153] Listing and watching *v1beta1.CSINode from k8s.io/client-go/informers/factory.go:134
I0920 04:43:44.989085  108295 httplog.go:90] GET /api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: (733.765µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38924]
I0920 04:43:44.989222  108295 httplog.go:90] GET /apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: (389.029µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38938]
I0920 04:43:44.989161  108295 reflector.go:118] Starting reflector *v1.ReplicationController (1s) from k8s.io/client-go/informers/factory.go:134
I0920 04:43:44.989281  108295 reflector.go:153] Listing and watching *v1.ReplicationController from k8s.io/client-go/informers/factory.go:134
I0920 04:43:44.989290  108295 httplog.go:90] GET /api/v1/nodes?limit=500&resourceVersion=0: (508.071µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38930]
I0920 04:43:44.989333  108295 reflector.go:118] Starting reflector *v1.ReplicaSet (1s) from k8s.io/client-go/informers/factory.go:134
I0920 04:43:44.989347  108295 reflector.go:153] Listing and watching *v1.ReplicaSet from k8s.io/client-go/informers/factory.go:134
I0920 04:43:44.989384  108295 reflector.go:118] Starting reflector *v1.Service (1s) from k8s.io/client-go/informers/factory.go:134
I0920 04:43:44.989403  108295 reflector.go:153] Listing and watching *v1.Service from k8s.io/client-go/informers/factory.go:134
I0920 04:43:44.989791  108295 reflector.go:118] Starting reflector *v1beta1.PodDisruptionBudget (1s) from k8s.io/client-go/informers/factory.go:134
I0920 04:43:44.989807  108295 reflector.go:153] Listing and watching *v1beta1.PodDisruptionBudget from k8s.io/client-go/informers/factory.go:134
I0920 04:43:44.989855  108295 httplog.go:90] GET /api/v1/persistentvolumes?limit=500&resourceVersion=0: (483.502µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38936]
I0920 04:43:44.989883  108295 httplog.go:90] GET /apis/apps/v1/statefulsets?limit=500&resourceVersion=0: (1.070788ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38932]
I0920 04:43:44.990831  108295 httplog.go:90] GET /apis/storage.k8s.io/v1beta1/csinodes?limit=500&resourceVersion=0: (861.562µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38930]
I0920 04:43:44.991097  108295 get.go:251] Starting watch for /apis/storage.k8s.io/v1/storageclasses, rv=59298 labels= fields= timeout=6m59s
I0920 04:43:44.991236  108295 get.go:251] Starting watch for /api/v1/persistentvolumeclaims, rv=59298 labels= fields= timeout=9m28s
I0920 04:43:44.991294  108295 httplog.go:90] GET /apis/apps/v1/replicasets?limit=500&resourceVersion=0: (544.534µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38924]
I0920 04:43:44.991303  108295 httplog.go:90] GET /api/v1/replicationcontrollers?limit=500&resourceVersion=0: (387.124µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38938]
I0920 04:43:44.991517  108295 get.go:251] Starting watch for /api/v1/nodes, rv=59298 labels= fields= timeout=6m37s
I0920 04:43:44.991261  108295 httplog.go:90] GET /api/v1/services?limit=500&resourceVersion=0: (667.441µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38932]
I0920 04:43:44.991812  108295 get.go:251] Starting watch for /apis/apps/v1/statefulsets, rv=59298 labels= fields= timeout=8m36s
I0920 04:43:44.991852  108295 httplog.go:90] GET /apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: (366.745µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38948]
I0920 04:43:44.991913  108295 get.go:251] Starting watch for /api/v1/persistentvolumes, rv=59298 labels= fields= timeout=6m23s
I0920 04:43:44.992076  108295 get.go:251] Starting watch for /api/v1/replicationcontrollers, rv=59298 labels= fields= timeout=8m17s
I0920 04:43:44.992440  108295 get.go:251] Starting watch for /apis/apps/v1/replicasets, rv=59298 labels= fields= timeout=8m30s
I0920 04:43:44.992477  108295 get.go:251] Starting watch for /api/v1/services, rv=59536 labels= fields= timeout=6m15s
I0920 04:43:44.992605  108295 get.go:251] Starting watch for /apis/policy/v1beta1/poddisruptionbudgets, rv=59298 labels= fields= timeout=9m35s
I0920 04:43:44.993327  108295 get.go:251] Starting watch for /apis/storage.k8s.io/v1beta1/csinodes, rv=59298 labels= fields= timeout=9m50s
I0920 04:43:45.087656  108295 shared_informer.go:227] caches populated
I0920 04:43:45.087695  108295 shared_informer.go:227] caches populated
I0920 04:43:45.087700  108295 shared_informer.go:227] caches populated
I0920 04:43:45.087704  108295 shared_informer.go:227] caches populated
I0920 04:43:45.087708  108295 shared_informer.go:227] caches populated
I0920 04:43:45.087713  108295 shared_informer.go:227] caches populated
I0920 04:43:45.087717  108295 shared_informer.go:227] caches populated
I0920 04:43:45.087721  108295 shared_informer.go:227] caches populated
I0920 04:43:45.087725  108295 shared_informer.go:227] caches populated
I0920 04:43:45.087788  108295 shared_informer.go:227] caches populated
I0920 04:43:45.087805  108295 shared_informer.go:227] caches populated
I0920 04:43:45.088013  108295 node_lifecycle_controller.go:327] Sending events to api server.
I0920 04:43:45.088090  108295 node_lifecycle_controller.go:359] Controller is using taint based evictions.
W0920 04:43:45.088135  108295 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0920 04:43:45.088229  108295 taint_manager.go:162] Sending events to api server.
I0920 04:43:45.088361  108295 node_lifecycle_controller.go:453] Controller will reconcile labels.
I0920 04:43:45.088384  108295 node_lifecycle_controller.go:465] Controller will taint node by condition.
W0920 04:43:45.088395  108295 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0920 04:43:45.088419  108295 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0920 04:43:45.088602  108295 node_lifecycle_controller.go:488] Starting node controller
I0920 04:43:45.088634  108295 shared_informer.go:197] Waiting for caches to sync for taint
I0920 04:43:45.091930  108295 httplog.go:90] POST /api/v1/namespaces: (2.580417ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38962]
I0920 04:43:45.092318  108295 node_lifecycle_controller.go:327] Sending events to api server.
I0920 04:43:45.092481  108295 node_lifecycle_controller.go:359] Controller is using taint based evictions.
I0920 04:43:45.092736  108295 taint_manager.go:162] Sending events to api server.
I0920 04:43:45.092880  108295 node_lifecycle_controller.go:453] Controller will reconcile labels.
I0920 04:43:45.092980  108295 node_lifecycle_controller.go:465] Controller will taint node by condition.
I0920 04:43:45.093145  108295 node_lifecycle_controller.go:488] Starting node controller
I0920 04:43:45.093239  108295 shared_informer.go:197] Waiting for caches to sync for taint
I0920 04:43:45.093383  108295 reflector.go:118] Starting reflector *v1.Namespace (1s) from k8s.io/client-go/informers/factory.go:134
I0920 04:43:45.093427  108295 reflector.go:153] Listing and watching *v1.Namespace from k8s.io/client-go/informers/factory.go:134
I0920 04:43:45.094710  108295 httplog.go:90] GET /api/v1/namespaces?limit=500&resourceVersion=0: (728.205µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38962]
I0920 04:43:45.095715  108295 get.go:251] Starting watch for /api/v1/namespaces, rv=59538 labels= fields= timeout=9m37s
I0920 04:43:45.193490  108295 shared_informer.go:227] caches populated
I0920 04:43:45.193552  108295 shared_informer.go:227] caches populated
I0920 04:43:45.193793  108295 reflector.go:118] Starting reflector *v1.Pod (1s) from k8s.io/client-go/informers/factory.go:134
I0920 04:43:45.193827  108295 reflector.go:153] Listing and watching *v1.Pod from k8s.io/client-go/informers/factory.go:134
I0920 04:43:45.193796  108295 reflector.go:118] Starting reflector *v1.DaemonSet (1s) from k8s.io/client-go/informers/factory.go:134
I0920 04:43:45.193950  108295 reflector.go:153] Listing and watching *v1.DaemonSet from k8s.io/client-go/informers/factory.go:134
I0920 04:43:45.195182  108295 httplog.go:90] GET /api/v1/pods?limit=500&resourceVersion=0: (617.302µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38966]
I0920 04:43:45.195184  108295 httplog.go:90] GET /apis/apps/v1/daemonsets?limit=500&resourceVersion=0: (607.333µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38964]
I0920 04:43:45.195792  108295 reflector.go:118] Starting reflector *v1beta1.Lease (1s) from k8s.io/client-go/informers/factory.go:134
I0920 04:43:45.196009  108295 reflector.go:153] Listing and watching *v1beta1.Lease from k8s.io/client-go/informers/factory.go:134
I0920 04:43:45.196298  108295 get.go:251] Starting watch for /apis/apps/v1/daemonsets, rv=59298 labels= fields= timeout=8m19s
I0920 04:43:45.196429  108295 get.go:251] Starting watch for /api/v1/pods, rv=59298 labels= fields= timeout=6m37s
I0920 04:43:45.197224  108295 httplog.go:90] GET /apis/coordination.k8s.io/v1beta1/leases?limit=500&resourceVersion=0: (356.017µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38970]
I0920 04:43:45.198073  108295 get.go:251] Starting watch for /apis/coordination.k8s.io/v1beta1/leases, rv=59298 labels= fields= timeout=6m7s
I0920 04:43:45.288917  108295 shared_informer.go:227] caches populated
I0920 04:43:45.288955  108295 shared_informer.go:204] Caches are synced for taint 
I0920 04:43:45.289188  108295 taint_manager.go:186] Starting NoExecuteTaintManager
I0920 04:43:45.293507  108295 shared_informer.go:227] caches populated
I0920 04:43:45.293546  108295 shared_informer.go:204] Caches are synced for taint 
I0920 04:43:45.293627  108295 taint_manager.go:186] Starting NoExecuteTaintManager
I0920 04:43:45.293722  108295 shared_informer.go:227] caches populated
I0920 04:43:45.293743  108295 shared_informer.go:227] caches populated
I0920 04:43:45.293751  108295 shared_informer.go:227] caches populated
I0920 04:43:45.293757  108295 shared_informer.go:227] caches populated
I0920 04:43:45.293763  108295 shared_informer.go:227] caches populated
I0920 04:43:45.293769  108295 shared_informer.go:227] caches populated
I0920 04:43:45.293775  108295 shared_informer.go:227] caches populated
I0920 04:43:45.293781  108295 shared_informer.go:227] caches populated
I0920 04:43:45.293792  108295 shared_informer.go:227] caches populated
I0920 04:43:45.293799  108295 shared_informer.go:227] caches populated
I0920 04:43:45.293805  108295 shared_informer.go:227] caches populated
I0920 04:43:45.293812  108295 shared_informer.go:227] caches populated
I0920 04:43:45.297476  108295 httplog.go:90] POST /api/v1/nodes: (2.523315ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38972]
I0920 04:43:45.298285  108295 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-0"}
I0920 04:43:45.298411  108295 taint_manager.go:438] Updating known taints on node node-0: []
I0920 04:43:45.298688  108295 node_tree.go:93] Added node "node-0" in group "region1:\x00:zone1" to NodeTree
I0920 04:43:45.298686  108295 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-0"}
I0920 04:43:45.298735  108295 taint_manager.go:438] Updating known taints on node node-0: []
I0920 04:43:45.299655  108295 httplog.go:90] GET /api/v1/nodes/node-0?resourceVersion=0: (608.243µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38976]
I0920 04:43:45.299750  108295 httplog.go:90] GET /api/v1/nodes/node-0?resourceVersion=0: (794.238µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38974]
I0920 04:43:45.300897  108295 httplog.go:90] POST /api/v1/nodes: (2.850687ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38972]
I0920 04:43:45.301633  108295 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-1"}
I0920 04:43:45.301674  108295 taint_manager.go:438] Updating known taints on node node-1: []
I0920 04:43:45.301718  108295 node_tree.go:93] Added node "node-1" in group "region1:\x00:zone1" to NodeTree
I0920 04:43:45.301797  108295 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-1"}
I0920 04:43:45.301805  108295 taint_manager.go:438] Updating known taints on node node-1: []
I0920 04:43:45.303180  108295 httplog.go:90] GET /api/v1/nodes/node-1?resourceVersion=0: (386.62µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38974]
I0920 04:43:45.305239  108295 httplog.go:90] POST /api/v1/nodes: (3.871853ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38972]
I0920 04:43:45.305910  108295 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-2"}
I0920 04:43:45.305941  108295 taint_manager.go:438] Updating known taints on node node-2: []
I0920 04:43:45.305987  108295 node_tree.go:93] Added node "node-2" in group "region1:\x00:zone1" to NodeTree
I0920 04:43:45.307422  108295 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-2"}
I0920 04:43:45.307443  108295 taint_manager.go:438] Updating known taints on node node-2: []
I0920 04:43:45.308029  108295 httplog.go:90] PATCH /api/v1/nodes/node-1: (3.894755ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38974]
I0920 04:43:45.308279  108295 controller_utils.go:204] Added [&Taint{Key:node.kubernetes.io/memory-pressure,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 04:43:45.301556893 +0000 UTC m=+286.094598152,} &Taint{Key:node.kubernetes.io/disk-pressure,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 04:43:45.301557145 +0000 UTC m=+286.094598391,} &Taint{Key:node.kubernetes.io/pid-pressure,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 04:43:45.301557418 +0000 UTC m=+286.094598653,}] Taint to Node node-1
I0920 04:43:45.308326  108295 controller_utils.go:216] Made sure that Node node-1 has no [] Taint
I0920 04:43:45.309222  108295 httplog.go:90] PATCH /api/v1/nodes/node-0: (3.428203ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38972]
I0920 04:43:45.309402  108295 controller_utils.go:204] Added [&Taint{Key:node.kubernetes.io/memory-pressure,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 04:43:45.298407738 +0000 UTC m=+286.091448994,} &Taint{Key:node.kubernetes.io/disk-pressure,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 04:43:45.298407915 +0000 UTC m=+286.091449145,} &Taint{Key:node.kubernetes.io/pid-pressure,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 04:43:45.298408034 +0000 UTC m=+286.091449263,}] Taint to Node node-0
I0920 04:43:45.309440  108295 controller_utils.go:216] Made sure that Node node-0 has no [] Taint
I0920 04:43:45.310119  108295 httplog.go:90] GET /api/v1/nodes/node-2?resourceVersion=0: (395.472µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38972]
I0920 04:43:45.311213  108295 httplog.go:90] POST /api/v1/namespaces/taint-based-evictions90d6e2e1-3956-4518-a3ea-bce8891909ea/pods: (2.265026ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38974]
I0920 04:43:45.311824  108295 scheduling_queue.go:830] About to try and schedule pod taint-based-evictions90d6e2e1-3956-4518-a3ea-bce8891909ea/testpod-0
I0920 04:43:45.311842  108295 scheduler.go:530] Attempting to schedule pod: taint-based-evictions90d6e2e1-3956-4518-a3ea-bce8891909ea/testpod-0
I0920 04:43:45.311858  108295 httplog.go:90] GET /api/v1/nodes/node-2?resourceVersion=0: (943.654µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38972]
I0920 04:43:45.311973  108295 scheduler_binder.go:257] AssumePodVolumes for pod "taint-based-evictions90d6e2e1-3956-4518-a3ea-bce8891909ea/testpod-0", node "node-2"
I0920 04:43:45.311986  108295 scheduler_binder.go:267] AssumePodVolumes for pod "taint-based-evictions90d6e2e1-3956-4518-a3ea-bce8891909ea/testpod-0", node "node-2": all PVCs bound and nothing to do
I0920 04:43:45.312037  108295 factory.go:606] Attempting to bind testpod-0 to node-2
I0920 04:43:45.312476  108295 taint_manager.go:398] Noticed pod update: types.NamespacedName{Namespace:"taint-based-evictions90d6e2e1-3956-4518-a3ea-bce8891909ea", Name:"testpod-0"}
I0920 04:43:45.312563  108295 taint_manager.go:398] Noticed pod update: types.NamespacedName{Namespace:"taint-based-evictions90d6e2e1-3956-4518-a3ea-bce8891909ea", Name:"testpod-0"}
I0920 04:43:45.313500  108295 store.go:362] GuaranteedUpdate of /440ab346-95db-43ac-9500-94e7a7e0cd5f/minions/node-0 failed because of a conflict, going to retry
I0920 04:43:45.314997  108295 httplog.go:90] PATCH /api/v1/nodes/node-0: (14.047892ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38976]
I0920 04:43:45.317147  108295 controller_utils.go:204] Added [&Taint{Key:node.kubernetes.io/memory-pressure,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 04:43:45.298342125 +0000 UTC m=+286.091383377,} &Taint{Key:node.kubernetes.io/disk-pressure,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 04:43:45.298342282 +0000 UTC m=+286.091383512,} &Taint{Key:node.kubernetes.io/pid-pressure,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 04:43:45.298342398 +0000 UTC m=+286.091383628,}] Taint to Node node-0
I0920 04:43:45.317219  108295 controller_utils.go:216] Made sure that Node node-0 has no [] Taint
I0920 04:43:45.319850  108295 httplog.go:90] PATCH /api/v1/nodes/node-2: (6.384811ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38974]
I0920 04:43:45.321290  108295 controller_utils.go:204] Added [&Taint{Key:node.kubernetes.io/memory-pressure,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 04:43:45.305855296 +0000 UTC m=+286.098896552,} &Taint{Key:node.kubernetes.io/disk-pressure,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 04:43:45.305855694 +0000 UTC m=+286.098896921,} &Taint{Key:node.kubernetes.io/pid-pressure,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 04:43:45.305855828 +0000 UTC m=+286.098897061,}] Taint to Node node-2
I0920 04:43:45.321349  108295 controller_utils.go:216] Made sure that Node node-2 has no [] Taint
I0920 04:43:45.321927  108295 store.go:362] GuaranteedUpdate of /440ab346-95db-43ac-9500-94e7a7e0cd5f/minions/node-2 failed because of a conflict, going to retry
I0920 04:43:45.322974  108295 httplog.go:90] PATCH /api/v1/nodes/node-2: (5.593736ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38976]
I0920 04:43:45.323946  108295 controller_utils.go:204] Added [&Taint{Key:node.kubernetes.io/memory-pressure,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 04:43:45.306043154 +0000 UTC m=+286.099084407,} &Taint{Key:node.kubernetes.io/disk-pressure,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 04:43:45.306043371 +0000 UTC m=+286.099084604,} &Taint{Key:node.kubernetes.io/pid-pressure,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 04:43:45.306043615 +0000 UTC m=+286.099084857,}] Taint to Node node-2
I0920 04:43:45.324087  108295 controller_utils.go:216] Made sure that Node node-2 has no [] Taint
I0920 04:43:45.326595  108295 httplog.go:90] POST /api/v1/namespaces/taint-based-evictions90d6e2e1-3956-4518-a3ea-bce8891909ea/pods/testpod-0/binding: (2.232248ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38972]
I0920 04:43:45.328335  108295 taint_manager.go:398] Noticed pod update: types.NamespacedName{Namespace:"taint-based-evictions90d6e2e1-3956-4518-a3ea-bce8891909ea", Name:"testpod-0"}
I0920 04:43:45.328494  108295 taint_manager.go:398] Noticed pod update: types.NamespacedName{Namespace:"taint-based-evictions90d6e2e1-3956-4518-a3ea-bce8891909ea", Name:"testpod-0"}
I0920 04:43:45.328745  108295 scheduler.go:662] pod taint-based-evictions90d6e2e1-3956-4518-a3ea-bce8891909ea/testpod-0 is bound successfully on node "node-2", 3 nodes evaluated, 1 nodes were found feasible. Bound node resource: "Capacity: CPU<4>|Memory<16Gi>|Pods<110>|StorageEphemeral<0>; Allocatable: CPU<4>|Memory<16Gi>|Pods<110>|StorageEphemeral<0>.".
I0920 04:43:45.331231  108295 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/taint-based-evictions90d6e2e1-3956-4518-a3ea-bce8891909ea/events: (1.898442ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38976]
I0920 04:43:45.335311  108295 httplog.go:90] GET /api/v1/nodes/node-1?resourceVersion=0: (582.007µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38978]
I0920 04:43:45.339864  108295 httplog.go:90] PATCH /api/v1/nodes/node-1: (3.209543ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38976]
I0920 04:43:45.340743  108295 controller_utils.go:204] Added [&Taint{Key:node.kubernetes.io/memory-pressure,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 04:43:45.301776671 +0000 UTC m=+286.094817925,} &Taint{Key:node.kubernetes.io/disk-pressure,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 04:43:45.301776843 +0000 UTC m=+286.094818068,} &Taint{Key:node.kubernetes.io/pid-pressure,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 04:43:45.30177706 +0000 UTC m=+286.094818291,}] Taint to Node node-1
I0920 04:43:45.340850  108295 controller_utils.go:216] Made sure that Node node-1 has no [] Taint
I0920 04:43:45.414037  108295 httplog.go:90] GET /api/v1/namespaces/taint-based-evictions90d6e2e1-3956-4518-a3ea-bce8891909ea/pods/testpod-0: (1.384286ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38976]
I0920 04:43:45.415351  108295 httplog.go:90] GET /api/v1/namespaces/taint-based-evictions90d6e2e1-3956-4518-a3ea-bce8891909ea/pods/testpod-0: (853.745µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38976]
I0920 04:43:45.416956  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.199384ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38976]
I0920 04:43:45.419108  108295 httplog.go:90] PUT /api/v1/nodes/node-2/status: (1.69045ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38976]
I0920 04:43:45.420123  108295 httplog.go:90] GET /api/v1/nodes/node-2?resourceVersion=0: (473.355µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38974]
I0920 04:43:45.420504  108295 httplog.go:90] GET /api/v1/nodes/node-2?resourceVersion=0: (343.04µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39000]
I0920 04:43:45.423034  108295 httplog.go:90] PATCH /api/v1/nodes/node-2: (2.194372ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38974]
I0920 04:43:45.423352  108295 controller_utils.go:204] Added [&Taint{Key:node.kubernetes.io/not-ready,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 04:43:45.419426924 +0000 UTC m=+286.212468172,}] Taint to Node node-2
I0920 04:43:45.423388  108295 store.go:362] GuaranteedUpdate of /440ab346-95db-43ac-9500-94e7a7e0cd5f/minions/node-2 failed because of a conflict, going to retry
I0920 04:43:45.423976  108295 httplog.go:90] GET /api/v1/nodes/node-2?resourceVersion=0: (369.471µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38974]
I0920 04:43:45.425254  108295 httplog.go:90] PATCH /api/v1/nodes/node-2: (3.642725ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39000]
I0920 04:43:45.425539  108295 controller_utils.go:204] Added [&Taint{Key:node.kubernetes.io/not-ready,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 04:43:45.419589597 +0000 UTC m=+286.212630859,}] Taint to Node node-2
I0920 04:43:45.426051  108295 httplog.go:90] GET /api/v1/nodes/node-2?resourceVersion=0: (341.765µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39000]
I0920 04:43:45.428198  108295 httplog.go:90] PATCH /api/v1/nodes/node-2: (3.271135ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38974]
I0920 04:43:45.428292  108295 store.go:362] GuaranteedUpdate of /440ab346-95db-43ac-9500-94e7a7e0cd5f/minions/node-2 failed because of a conflict, going to retry
I0920 04:43:45.428409  108295 controller_utils.go:216] Made sure that Node node-2 has no [&Taint{Key:node.kubernetes.io/memory-pressure,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 04:43:45 +0000 UTC,} &Taint{Key:node.kubernetes.io/disk-pressure,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 04:43:45 +0000 UTC,} &Taint{Key:node.kubernetes.io/pid-pressure,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 04:43:45 +0000 UTC,}] Taint
I0920 04:43:45.429221  108295 httplog.go:90] PATCH /api/v1/nodes/node-2: (2.357833ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39000]
I0920 04:43:45.429588  108295 controller_utils.go:216] Made sure that Node node-2 has no [&Taint{Key:node.kubernetes.io/memory-pressure,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 04:43:45 +0000 UTC,} &Taint{Key:node.kubernetes.io/disk-pressure,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 04:43:45 +0000 UTC,} &Taint{Key:node.kubernetes.io/pid-pressure,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 04:43:45 +0000 UTC,}] Taint
I0920 04:43:45.521751  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.40137ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38974]
I0920 04:43:45.623116  108295 httplog.go:90] GET /api/v1/nodes/node-2: (2.720899ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38974]
I0920 04:43:45.722062  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.699958ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38974]
I0920 04:43:45.751952  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:45.752317  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:45.752317  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:45.753622  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:45.754733  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:45.756438  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:45.756810  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:45.828091  108295 httplog.go:90] GET /api/v1/nodes/node-2: (2.46865ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38974]
I0920 04:43:45.921945  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.661948ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38974]
I0920 04:43:45.989983  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:45.990439  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:45.990905  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:45.991089  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:45.992232  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:45.992888  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:46.023385  108295 httplog.go:90] GET /api/v1/nodes/node-2: (3.141524ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38974]
I0920 04:43:46.121752  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.466176ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38974]
I0920 04:43:46.195873  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:46.222046  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.626794ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38974]
I0920 04:43:46.321948  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.718745ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38974]
I0920 04:43:46.424314  108295 httplog.go:90] GET /api/v1/nodes/node-2: (2.209381ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38974]
I0920 04:43:46.524519  108295 httplog.go:90] GET /api/v1/nodes/node-2: (2.057902ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38974]
I0920 04:43:46.622556  108295 httplog.go:90] GET /api/v1/nodes/node-2: (2.213536ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38974]
I0920 04:43:46.726053  108295 httplog.go:90] GET /api/v1/nodes/node-2: (5.453084ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38974]
I0920 04:43:46.752086  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:46.752514  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:46.752547  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:46.753774  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:46.755027  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:46.756822  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:46.757044  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:46.821916  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.653675ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38974]
I0920 04:43:46.924965  108295 httplog.go:90] GET /api/v1/nodes/node-2: (4.632424ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38974]
I0920 04:43:46.990216  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:46.990615  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:46.991057  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:46.991234  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:46.992434  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:46.993039  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:47.021624  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.435179ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38974]
I0920 04:43:47.121780  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.535435ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38974]
I0920 04:43:47.196114  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:47.221863  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.595692ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38974]
I0920 04:43:47.321898  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.622089ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38974]
I0920 04:43:47.421718  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.462719ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38974]
I0920 04:43:47.521541  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.36911ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38974]
I0920 04:43:47.621872  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.600649ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38974]
I0920 04:43:47.721851  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.612155ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38974]
I0920 04:43:47.752325  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:47.752665  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:47.752715  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:47.753876  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:47.755252  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:47.757042  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:47.757260  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:47.821683  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.465071ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38974]
I0920 04:43:47.921687  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.447019ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38974]
I0920 04:43:47.990435  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:47.990773  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:47.991225  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:47.991406  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:47.993092  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:47.993169  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:48.022040  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.841192ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38974]
I0920 04:43:48.121775  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.533662ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38974]
I0920 04:43:48.196310  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:48.221625  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.421403ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38974]
I0920 04:43:48.321865  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.592172ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38974]
I0920 04:43:48.422033  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.712749ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38974]
I0920 04:43:48.522276  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.994419ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38974]
I0920 04:43:48.621817  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.626196ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38974]
I0920 04:43:48.652310  108295 httplog.go:90] GET /api/v1/namespaces/default: (1.503596ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43256]
I0920 04:43:48.653744  108295 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (838.437µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43256]
I0920 04:43:48.654910  108295 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (770.62µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43256]
I0920 04:43:48.721647  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.360271ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38974]
I0920 04:43:48.752520  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:48.752879  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:48.752897  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:48.754028  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:48.755473  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:48.757168  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:48.757396  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:48.821893  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.647619ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38974]
I0920 04:43:48.923055  108295 httplog.go:90] GET /api/v1/nodes/node-2: (2.195706ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38974]
I0920 04:43:48.990777  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:48.990987  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:48.991376  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:48.991572  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:48.993238  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:48.993347  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:49.021879  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.585891ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38974]
I0920 04:43:49.121436  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.239489ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38974]
I0920 04:43:49.196531  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:49.221946  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.65948ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38974]
I0920 04:43:49.326019  108295 httplog.go:90] GET /api/v1/nodes/node-2: (4.922002ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38974]
I0920 04:43:49.422884  108295 httplog.go:90] GET /api/v1/nodes/node-2: (2.525435ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38974]
I0920 04:43:49.521655  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.435009ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38974]
I0920 04:43:49.621892  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.471503ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38974]
I0920 04:43:49.721841  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.469144ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38974]
I0920 04:43:49.752726  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:49.753027  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:49.753030  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:49.754174  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:49.755633  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:49.757325  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:49.757528  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:49.821930  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.605382ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38974]
I0920 04:43:49.922053  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.69579ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38974]
I0920 04:43:49.991142  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:49.991147  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:49.991632  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:49.991763  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:49.993413  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:49.993415  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:50.022433  108295 httplog.go:90] GET /api/v1/nodes/node-2: (2.046369ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38974]
I0920 04:43:50.122095  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.7539ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38974]
I0920 04:43:50.196661  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:50.222085  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.587196ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38974]
I0920 04:43:50.289671  108295 node_lifecycle_controller.go:706] Controller observed a new Node: "node-0"
I0920 04:43:50.289700  108295 controller_utils.go:168] Recording Registered Node node-0 in Controller event message for node node-0
I0920 04:43:50.289753  108295 node_lifecycle_controller.go:1244] Initializing eviction metric for zone: region1:�:zone1
I0920 04:43:50.289772  108295 node_lifecycle_controller.go:706] Controller observed a new Node: "node-1"
I0920 04:43:50.289777  108295 controller_utils.go:168] Recording Registered Node node-1 in Controller event message for node node-1
I0920 04:43:50.289785  108295 node_lifecycle_controller.go:706] Controller observed a new Node: "node-2"
I0920 04:43:50.289789  108295 controller_utils.go:168] Recording Registered Node node-2 in Controller event message for node node-2
W0920 04:43:50.289828  108295 node_lifecycle_controller.go:940] Missing timestamp for Node node-0. Assuming now as a timestamp.
W0920 04:43:50.289887  108295 node_lifecycle_controller.go:940] Missing timestamp for Node node-1. Assuming now as a timestamp.
W0920 04:43:50.289913  108295 node_lifecycle_controller.go:940] Missing timestamp for Node node-2. Assuming now as a timestamp.
I0920 04:43:50.289940  108295 node_lifecycle_controller.go:770] Node node-2 is NotReady as of 2019-09-20 04:43:50.289918226 +0000 UTC m=+291.082959464. Adding it to the Taint queue.
I0920 04:43:50.289970  108295 node_lifecycle_controller.go:1144] Controller detected that zone region1:�:zone1 is now in state Normal.
I0920 04:43:50.289934  108295 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-0", UID:"9ba84dd9-e381-4cec-9a67-501507504919", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node node-0 event: Registered Node node-0 in Controller
I0920 04:43:50.290060  108295 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-1", UID:"22201c93-c4bc-4a1c-8382-f7684fa34cab", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node node-1 event: Registered Node node-1 in Controller
I0920 04:43:50.290114  108295 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-2", UID:"77b5f3c2-f096-47b5-be9d-a92dad7783e9", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node node-2 event: Registered Node node-2 in Controller
I0920 04:43:50.292048  108295 httplog.go:90] POST /api/v1/namespaces/default/events: (1.878252ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38974]
I0920 04:43:50.293626  108295 httplog.go:90] POST /api/v1/namespaces/default/events: (1.181874ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38974]
I0920 04:43:50.293727  108295 node_lifecycle_controller.go:706] Controller observed a new Node: "node-2"
I0920 04:43:50.293744  108295 controller_utils.go:168] Recording Registered Node node-2 in Controller event message for node node-2
I0920 04:43:50.293788  108295 node_lifecycle_controller.go:1244] Initializing eviction metric for zone: region1:�:zone1
I0920 04:43:50.293814  108295 node_lifecycle_controller.go:706] Controller observed a new Node: "node-0"
I0920 04:43:50.293807  108295 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-2", UID:"77b5f3c2-f096-47b5-be9d-a92dad7783e9", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node node-2 event: Registered Node node-2 in Controller
I0920 04:43:50.293821  108295 controller_utils.go:168] Recording Registered Node node-0 in Controller event message for node node-0
I0920 04:43:50.293841  108295 node_lifecycle_controller.go:706] Controller observed a new Node: "node-1"
I0920 04:43:50.293847  108295 controller_utils.go:168] Recording Registered Node node-1 in Controller event message for node node-1
W0920 04:43:50.293872  108295 node_lifecycle_controller.go:940] Missing timestamp for Node node-2. Assuming now as a timestamp.
I0920 04:43:50.293904  108295 node_lifecycle_controller.go:770] Node node-2 is NotReady as of 2019-09-20 04:43:50.293890444 +0000 UTC m=+291.086931684. Adding it to the Taint queue.
I0920 04:43:50.293927  108295 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-0", UID:"9ba84dd9-e381-4cec-9a67-501507504919", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node node-0 event: Registered Node node-0 in Controller
W0920 04:43:50.293935  108295 node_lifecycle_controller.go:940] Missing timestamp for Node node-0. Assuming now as a timestamp.
W0920 04:43:50.293978  108295 node_lifecycle_controller.go:940] Missing timestamp for Node node-1. Assuming now as a timestamp.
I0920 04:43:50.294012  108295 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-1", UID:"22201c93-c4bc-4a1c-8382-f7684fa34cab", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node node-1 event: Registered Node node-1 in Controller
I0920 04:43:50.294016  108295 node_lifecycle_controller.go:1144] Controller detected that zone region1:�:zone1 is now in state Normal.
I0920 04:43:50.295091  108295 httplog.go:90] POST /api/v1/namespaces/default/events: (1.033606ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38974]
I0920 04:43:50.295379  108295 httplog.go:90] POST /api/v1/namespaces/default/events: (1.065311ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38976]
I0920 04:43:50.296487  108295 httplog.go:90] POST /api/v1/namespaces/default/events: (1.044807ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38974]
I0920 04:43:50.298006  108295 httplog.go:90] POST /api/v1/namespaces/default/events: (1.091444ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38974]
I0920 04:43:50.301607  108295 httplog.go:90] GET /api/v1/nodes/node-2?resourceVersion=0: (375.065µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38974]
I0920 04:43:50.304041  108295 httplog.go:90] PATCH /api/v1/nodes/node-2: (1.665974ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38974]
I0920 04:43:50.304304  108295 controller_utils.go:204] Added [&Taint{Key:node.kubernetes.io/not-ready,Value:,Effect:NoExecute,TimeAdded:2019-09-20 04:43:50.301021029 +0000 UTC m=+291.094062254,}] Taint to Node node-2
I0920 04:43:50.304410  108295 controller_utils.go:216] Made sure that Node node-2 has no [&Taint{Key:node.kubernetes.io/unreachable,Value:,Effect:NoExecute,TimeAdded:<nil>,}] Taint
I0920 04:43:50.304484  108295 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-2"}
I0920 04:43:50.304653  108295 taint_manager.go:438] Updating known taints on node node-2: [{node.kubernetes.io/not-ready  NoExecute 2019-09-20 04:43:50 +0000 UTC}]
I0920 04:43:50.304836  108295 timed_workers.go:110] Adding TimedWorkerQueue item taint-based-evictions90d6e2e1-3956-4518-a3ea-bce8891909ea/testpod-0 at 2019-09-20 04:43:50.3048171 +0000 UTC m=+291.097858349 to be fired at 2019-09-20 04:47:10.3048171 +0000 UTC m=+491.097858349
I0920 04:43:50.304509  108295 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-2"}
I0920 04:43:50.305005  108295 taint_manager.go:438] Updating known taints on node node-2: [{node.kubernetes.io/not-ready  NoExecute 2019-09-20 04:43:50 +0000 UTC}]
I0920 04:43:50.305102  108295 timed_workers.go:110] Adding TimedWorkerQueue item taint-based-evictions90d6e2e1-3956-4518-a3ea-bce8891909ea/testpod-0 at 2019-09-20 04:43:50.305094324 +0000 UTC m=+291.098135576 to be fired at 2019-09-20 04:47:10.305094324 +0000 UTC m=+491.098135576
I0920 04:43:50.306940  108295 httplog.go:90] GET /api/v1/nodes/node-2?resourceVersion=0: (369.927µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38974]
I0920 04:43:50.309282  108295 httplog.go:90] PATCH /api/v1/nodes/node-2: (1.719274ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38974]
I0920 04:43:50.309588  108295 controller_utils.go:204] Added [&Taint{Key:node.kubernetes.io/not-ready,Value:,Effect:NoExecute,TimeAdded:2019-09-20 04:43:50.306369055 +0000 UTC m=+291.099410282,}] Taint to Node node-2
I0920 04:43:50.309714  108295 controller_utils.go:216] Made sure that Node node-2 has no [&Taint{Key:node.kubernetes.io/unreachable,Value:,Effect:NoExecute,TimeAdded:<nil>,}] Taint
I0920 04:43:50.321498  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.273039ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38974]
I0920 04:43:50.422118  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.813582ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38974]
I0920 04:43:50.521856  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.60112ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38974]
I0920 04:43:50.628111  108295 httplog.go:90] GET /api/v1/nodes/node-2: (6.150339ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38974]
I0920 04:43:50.724872  108295 httplog.go:90] GET /api/v1/nodes/node-2: (2.562222ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38974]
I0920 04:43:50.753690  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:50.756002  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:50.756651  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:50.756687  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:50.756716  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:50.759196  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:50.760231  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:50.821676  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.460089ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38974]
I0920 04:43:50.922124  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.812058ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38974]
I0920 04:43:50.991338  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:50.991338  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:50.991733  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:50.991896  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:50.993577  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:50.993589  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:51.021737  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.509251ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38974]
I0920 04:43:51.121563  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.361021ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38974]
I0920 04:43:51.196849  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:51.221722  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.522736ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38974]
I0920 04:43:51.321750  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.572654ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38974]
I0920 04:43:51.422359  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.968977ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38974]
I0920 04:43:51.522885  108295 httplog.go:90] GET /api/v1/nodes/node-2: (2.541713ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38974]
I0920 04:43:51.621437  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.228386ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38974]
I0920 04:43:51.722010  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.648484ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38974]
I0920 04:43:51.754213  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:51.756216  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:51.756757  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:51.756758  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:51.756953  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:51.759497  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:51.760419  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:51.821363  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.159928ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38974]
I0920 04:43:51.921729  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.503705ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38974]
I0920 04:43:51.991763  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:51.991770  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:51.992109  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:51.992283  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:51.993847  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:51.993874  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:52.022228  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.848116ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38974]
I0920 04:43:52.122410  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.726493ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38974]
I0920 04:43:52.197078  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:52.221727  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.503727ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38974]
I0920 04:43:52.321899  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.584485ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38974]
I0920 04:43:52.421630  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.442247ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38974]
I0920 04:43:52.521625  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.45801ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38974]
I0920 04:43:52.622391  108295 httplog.go:90] GET /api/v1/nodes/node-2: (2.065401ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38974]
I0920 04:43:52.725593  108295 httplog.go:90] GET /api/v1/nodes/node-2: (2.511153ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38974]
I0920 04:43:52.754498  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:52.756417  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:52.756897  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:52.756916  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:52.757083  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:52.760716  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:52.760716  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:52.828775  108295 httplog.go:90] GET /api/v1/nodes/node-2: (8.398171ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38974]
I0920 04:43:52.925597  108295 httplog.go:90] GET /api/v1/nodes/node-2: (2.072444ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38974]
I0920 04:43:52.992039  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:52.992061  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:52.992442  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:52.992506  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:52.994049  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:52.994186  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:53.021942  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.688544ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38974]
I0920 04:43:53.122867  108295 httplog.go:90] GET /api/v1/nodes/node-2: (2.292868ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38974]
I0920 04:43:53.197315  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:53.221907  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.590878ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38974]
I0920 04:43:53.322208  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.884485ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38974]
I0920 04:43:53.422816  108295 httplog.go:90] GET /api/v1/nodes/node-2: (2.229401ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38974]
I0920 04:43:53.523905  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.930892ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38974]
I0920 04:43:53.621930  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.723164ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38974]
I0920 04:43:53.721957  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.70017ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38974]
I0920 04:43:53.754685  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:53.756633  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:53.757083  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:53.757215  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:53.757255  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:53.761544  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:53.761572  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:53.822360  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.955888ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38974]
I0920 04:43:53.921906  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.631748ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38974]
I0920 04:43:53.992249  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:53.992302  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:53.992668  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:53.992757  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:53.994251  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:53.994411  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:54.021839  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.606297ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38974]
I0920 04:43:54.121949  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.682654ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38974]
I0920 04:43:54.197668  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:54.224356  108295 httplog.go:90] GET /api/v1/nodes/node-2: (3.980537ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38974]
I0920 04:43:54.324053  108295 httplog.go:90] GET /api/v1/nodes/node-2: (3.74022ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38974]
I0920 04:43:54.424343  108295 httplog.go:90] GET /api/v1/nodes/node-2: (2.448533ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38974]
I0920 04:43:54.523177  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.988174ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38974]
I0920 04:43:54.621941  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.613965ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38974]
I0920 04:43:54.723227  108295 httplog.go:90] GET /api/v1/nodes/node-2: (2.836221ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38974]
I0920 04:43:54.754909  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:54.756862  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:54.757253  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:54.757400  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:54.757427  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:54.761790  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:54.761796  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:54.795594  108295 httplog.go:90] GET /api/v1/namespaces/default: (1.810292ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38974]
I0920 04:43:54.796998  108295 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.011132ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38974]
I0920 04:43:54.798582  108295 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.123343ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38974]
I0920 04:43:54.822957  108295 httplog.go:90] GET /api/v1/nodes/node-2: (2.286202ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38974]
I0920 04:43:54.922548  108295 httplog.go:90] GET /api/v1/nodes/node-2: (2.240152ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38974]
I0920 04:43:54.992430  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:54.992430  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:54.992798  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:54.992827  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:54.994409  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:54.994527  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:55.021817  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.59312ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38974]
I0920 04:43:55.121659  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.449455ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38974]
I0920 04:43:55.197897  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:55.221720  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.484076ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38974]
I0920 04:43:55.290253  108295 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 5.000327407s. Last Ready is: &NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:0001-01-01 00:00:00 +0000 UTC,Reason:,Message:,}
I0920 04:43:55.290320  108295 node_lifecycle_controller.go:1012] Condition MemoryPressure of node node-2 was never updated by kubelet
I0920 04:43:55.290331  108295 node_lifecycle_controller.go:1012] Condition DiskPressure of node node-2 was never updated by kubelet
I0920 04:43:55.290338  108295 node_lifecycle_controller.go:1012] Condition PIDPressure of node node-2 was never updated by kubelet
I0920 04:43:55.293084  108295 httplog.go:90] PUT /api/v1/nodes/node-2/status: (2.281259ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38974]
I0920 04:43:55.293487  108295 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 5.003646035s. Last Ready is: &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:0001-01-01 00:00:00 +0000 UTC,Reason:,Message:,}
I0920 04:43:55.293532  108295 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 5.003695225s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:True,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:0001-01-01 00:00:00 +0000 UTC,Reason:,Message:,}
I0920 04:43:55.293560  108295 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 5.003716611s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:True,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:0001-01-01 00:00:00 +0000 UTC,Reason:,Message:,}
I0920 04:43:55.293571  108295 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 5.003736206s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:True,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:0001-01-01 00:00:00 +0000 UTC,Reason:,Message:,}
I0920 04:43:55.294089  108295 httplog.go:90] GET /api/v1/nodes/node-2?resourceVersion=0: (531.19µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38974]
I0920 04:43:55.294227  108295 httplog.go:90] GET /api/v1/nodes/node-2?resourceVersion=0: (533.251µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38976]
I0920 04:43:55.294270  108295 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 5.00031898s. Last Ready is: &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:0001-01-01 00:00:00 +0000 UTC,Reason:,Message:,}
I0920 04:43:55.294378  108295 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 5.000424673s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:True,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:0001-01-01 00:00:00 +0000 UTC,Reason:,Message:,}
I0920 04:43:55.294502  108295 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 5.000550587s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:True,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:0001-01-01 00:00:00 +0000 UTC,Reason:,Message:,}
I0920 04:43:55.294623  108295 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 5.000672266s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:True,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:0001-01-01 00:00:00 +0000 UTC,Reason:,Message:,}
I0920 04:43:55.297244  108295 store.go:362] GuaranteedUpdate of /440ab346-95db-43ac-9500-94e7a7e0cd5f/minions/node-2 failed because of a conflict, going to retry
I0920 04:43:55.297260  108295 httplog.go:90] GET /api/v1/nodes/node-0?resourceVersion=0: (311.795µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39016]
I0920 04:43:55.297260  108295 httplog.go:90] GET /api/v1/nodes/node-0?resourceVersion=0: (359.232µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39014]
I0920 04:43:55.297572  108295 httplog.go:90] PUT /api/v1/nodes/node-0/status: (3.42974ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39010]
I0920 04:43:55.297599  108295 httplog.go:90] PATCH /api/v1/nodes/node-2: (2.737806ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38976]
I0920 04:43:55.297844  108295 controller_utils.go:180] Recording status change NodeNotReady event message for node node-0
I0920 04:43:55.297859  108295 controller_utils.go:124] Update ready status of pods on node [node-0]
I0920 04:43:55.297873  108295 controller_utils.go:204] Added [&Taint{Key:node.kubernetes.io/unreachable,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 04:43:55.293340569 +0000 UTC m=+296.086381796,}] Taint to Node node-2
I0920 04:43:55.298116  108295 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-0", UID:"9ba84dd9-e381-4cec-9a67-501507504919", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeNotReady' Node node-0 status is now: NodeNotReady
I0920 04:43:55.298169  108295 httplog.go:90] PATCH /api/v1/nodes/node-2: (2.720744ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39012]
I0920 04:43:55.297247  108295 store.go:362] GuaranteedUpdate of /440ab346-95db-43ac-9500-94e7a7e0cd5f/minions/node-0 failed because of a conflict, going to retry
I0920 04:43:55.298357  108295 controller_utils.go:204] Added [&Taint{Key:node.kubernetes.io/unreachable,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 04:43:55.293428413 +0000 UTC m=+296.086469658,}] Taint to Node node-2
I0920 04:43:55.298407  108295 httplog.go:90] PUT /api/v1/nodes/node-0/status: (3.501107ms) 409 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38974]
I0920 04:43:55.298373  108295 httplog.go:90] GET /api/v1/nodes/node-2?resourceVersion=0: (329.968µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39014]
E0920 04:43:55.298705  108295 node_lifecycle_controller.go:1037] Error updating node node-0: Operation cannot be fulfilled on nodes "node-0": the object has been modified; please apply your changes to the latest version and try again
I0920 04:43:55.299033  108295 httplog.go:90] GET /api/v1/pods?fieldSelector=spec.nodeName%3Dnode-0: (973.3µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39016]
I0920 04:43:55.299199  108295 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 5.009296758s. Last Ready is: &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:0001-01-01 00:00:00 +0000 UTC,Reason:,Message:,}
I0920 04:43:55.299233  108295 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 5.009333061s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:True,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:0001-01-01 00:00:00 +0000 UTC,Reason:,Message:,}
I0920 04:43:55.299246  108295 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 5.009346318s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:True,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:0001-01-01 00:00:00 +0000 UTC,Reason:,Message:,}
I0920 04:43:55.299256  108295 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 5.009356806s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:True,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:0001-01-01 00:00:00 +0000 UTC,Reason:,Message:,}
I0920 04:43:55.299273  108295 httplog.go:90] GET /api/v1/nodes/node-2?resourceVersion=0: (680.015µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39012]
I0920 04:43:55.301660  108295 httplog.go:90] POST /api/v1/namespaces/default/events: (3.363351ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39020]
I0920 04:43:55.301775  108295 store.go:362] GuaranteedUpdate of /440ab346-95db-43ac-9500-94e7a7e0cd5f/minions/node-2 failed because of a conflict, going to retry
I0920 04:43:55.301884  108295 httplog.go:90] PATCH /api/v1/nodes/node-0: (3.548743ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39018]
I0920 04:43:55.301884  108295 httplog.go:90] GET /api/v1/nodes/node-0: (3.031249ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38974]
I0920 04:43:55.302116  108295 controller_utils.go:204] Added [&Taint{Key:node.kubernetes.io/unreachable,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 04:43:55.296608792 +0000 UTC m=+296.089650043,}] Taint to Node node-0
I0920 04:43:55.302224  108295 httplog.go:90] PATCH /api/v1/nodes/node-2: (2.633808ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39016]
I0920 04:43:55.302325  108295 store.go:362] GuaranteedUpdate of /440ab346-95db-43ac-9500-94e7a7e0cd5f/minions/node-0 failed because of a conflict, going to retry
I0920 04:43:55.302516  108295 httplog.go:90] GET /api/v1/nodes/node-0?resourceVersion=0: (282.678µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39018]
I0920 04:43:55.302512  108295 controller_utils.go:216] Made sure that Node node-2 has no [&Taint{Key:node.kubernetes.io/not-ready,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 04:43:45 +0000 UTC,}] Taint
I0920 04:43:55.302560  108295 controller_utils.go:204] Added [] Taint to Node node-2
I0920 04:43:55.302821  108295 httplog.go:90] PATCH /api/v1/nodes/node-2: (2.964088ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39014]
I0920 04:43:55.303096  108295 controller_utils.go:216] Made sure that Node node-2 has no [&Taint{Key:node.kubernetes.io/not-ready,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 04:43:45 +0000 UTC,}] Taint
I0920 04:43:55.303232  108295 httplog.go:90] GET /api/v1/nodes/node-2?resourceVersion=0: (524.299µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39016]
I0920 04:43:55.303275  108295 httplog.go:90] PATCH /api/v1/nodes/node-0: (3.39829ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39012]
I0920 04:43:55.303570  108295 controller_utils.go:216] Made sure that Node node-2 has no [&Taint{Key:node.kubernetes.io/not-ready,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 04:43:45 +0000 UTC,}] Taint
I0920 04:43:55.303694  108295 controller_utils.go:204] Added [&Taint{Key:node.kubernetes.io/unreachable,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 04:43:55.296563995 +0000 UTC m=+296.089605261,}] Taint to Node node-0
I0920 04:43:55.305132  108295 httplog.go:90] GET /api/v1/nodes/node-0?resourceVersion=0: (221.659µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39012]
I0920 04:43:55.305248  108295 httplog.go:90] PATCH /api/v1/nodes/node-0: (2.037328ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39014]
I0920 04:43:55.305614  108295 controller_utils.go:216] Made sure that Node node-0 has no [&Taint{Key:node.kubernetes.io/memory-pressure,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 04:43:45 +0000 UTC,} &Taint{Key:node.kubernetes.io/disk-pressure,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 04:43:45 +0000 UTC,} &Taint{Key:node.kubernetes.io/pid-pressure,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 04:43:45 +0000 UTC,}] Taint
I0920 04:43:55.305682  108295 controller_utils.go:204] Added [] Taint to Node node-0
I0920 04:43:55.305889  108295 httplog.go:90] PUT /api/v1/nodes/node-1/status: (1.559533ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39020]
I0920 04:43:55.306124  108295 controller_utils.go:180] Recording status change NodeNotReady event message for node node-1
I0920 04:43:55.306151  108295 controller_utils.go:124] Update ready status of pods on node [node-1]
I0920 04:43:55.306264  108295 httplog.go:90] GET /api/v1/nodes/node-0?resourceVersion=0: (329.446µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39012]
I0920 04:43:55.306535  108295 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-1", UID:"22201c93-c4bc-4a1c-8382-f7684fa34cab", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeNotReady' Node node-1 status is now: NodeNotReady
I0920 04:43:55.306617  108295 controller_utils.go:216] Made sure that Node node-0 has no [&Taint{Key:node.kubernetes.io/memory-pressure,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 04:43:45 +0000 UTC,} &Taint{Key:node.kubernetes.io/disk-pressure,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 04:43:45 +0000 UTC,} &Taint{Key:node.kubernetes.io/pid-pressure,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 04:43:45 +0000 UTC,}] Taint
I0920 04:43:55.306676  108295 httplog.go:90] GET /api/v1/nodes/node-1?resourceVersion=0: (374.62µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39020]
I0920 04:43:55.307656  108295 httplog.go:90] GET /api/v1/nodes/node-1?resourceVersion=0: (871.436µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39012]
I0920 04:43:55.307721  108295 httplog.go:90] PATCH /api/v1/nodes/node-0: (1.76678ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39016]
I0920 04:43:55.307780  108295 httplog.go:90] GET /api/v1/pods?fieldSelector=spec.nodeName%3Dnode-1: (1.273003ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39022]
I0920 04:43:55.307955  108295 node_lifecycle_controller.go:1094] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
I0920 04:43:55.308020  108295 controller_utils.go:216] Made sure that Node node-0 has no [&Taint{Key:node.kubernetes.io/memory-pressure,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 04:43:45 +0000 UTC,} &Taint{Key:node.kubernetes.io/disk-pressure,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 04:43:45 +0000 UTC,} &Taint{Key:node.kubernetes.io/pid-pressure,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 04:43:45 +0000 UTC,}] Taint
I0920 04:43:55.308382  108295 httplog.go:90] POST /api/v1/namespaces/default/events: (1.605657ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:43:55.308611  108295 httplog.go:90] GET /api/v1/nodes/node-2?resourceVersion=0: (443.549µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39016]
I0920 04:43:55.309373  108295 httplog.go:90] PATCH /api/v1/nodes/node-1: (2.106126ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39020]
I0920 04:43:55.309855  108295 controller_utils.go:204] Added [&Taint{Key:node.kubernetes.io/unreachable,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 04:43:55.306163072 +0000 UTC m=+296.099204325,}] Taint to Node node-1
I0920 04:43:55.310490  108295 httplog.go:90] GET /api/v1/nodes/node-1?resourceVersion=0: (390.01µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39020]
I0920 04:43:55.310752  108295 store.go:362] GuaranteedUpdate of /440ab346-95db-43ac-9500-94e7a7e0cd5f/minions/node-1 failed because of a conflict, going to retry
I0920 04:43:55.311485  108295 httplog.go:90] PATCH /api/v1/nodes/node-1: (2.507352ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:43:55.311783  108295 controller_utils.go:204] Added [&Taint{Key:node.kubernetes.io/unreachable,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 04:43:55.306237756 +0000 UTC m=+296.099279007,}] Taint to Node node-1
I0920 04:43:55.312270  108295 httplog.go:90] GET /api/v1/nodes/node-1?resourceVersion=0: (358.515µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:43:55.312945  108295 httplog.go:90] PATCH /api/v1/nodes/node-1: (1.629216ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39020]
I0920 04:43:55.313012  108295 httplog.go:90] PATCH /api/v1/nodes/node-2: (3.213382ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39016]
I0920 04:43:55.313320  108295 controller_utils.go:216] Made sure that Node node-1 has no [&Taint{Key:node.kubernetes.io/memory-pressure,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 04:43:45 +0000 UTC,} &Taint{Key:node.kubernetes.io/disk-pressure,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 04:43:45 +0000 UTC,} &Taint{Key:node.kubernetes.io/pid-pressure,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 04:43:45 +0000 UTC,}] Taint
I0920 04:43:55.313369  108295 controller_utils.go:204] Added [] Taint to Node node-1
I0920 04:43:55.313903  108295 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-2"}
I0920 04:43:55.313903  108295 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-2"}
I0920 04:43:55.313922  108295 taint_manager.go:438] Updating known taints on node node-2: []
I0920 04:43:55.313932  108295 taint_manager.go:438] Updating known taints on node node-2: []
I0920 04:43:55.313946  108295 taint_manager.go:459] All taints were removed from the Node node-2. Cancelling all evictions...
I0920 04:43:55.313946  108295 taint_manager.go:459] All taints were removed from the Node node-2. Cancelling all evictions...
I0920 04:43:55.313957  108295 timed_workers.go:129] Cancelling TimedWorkerQueue item taint-based-evictions90d6e2e1-3956-4518-a3ea-bce8891909ea/testpod-0 at 2019-09-20 04:43:55.313953952 +0000 UTC m=+296.106995200
I0920 04:43:55.313961  108295 timed_workers.go:129] Cancelling TimedWorkerQueue item taint-based-evictions90d6e2e1-3956-4518-a3ea-bce8891909ea/testpod-0 at 2019-09-20 04:43:55.313958579 +0000 UTC m=+296.106999832
I0920 04:43:55.314009  108295 event.go:255] Event(v1.ObjectReference{Kind:"Pod", Namespace:"taint-based-evictions90d6e2e1-3956-4518-a3ea-bce8891909ea", Name:"testpod-0", UID:"", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TaintManagerEviction' Cancelling deletion of Pod taint-based-evictions90d6e2e1-3956-4518-a3ea-bce8891909ea/testpod-0
I0920 04:43:55.314027  108295 event.go:255] Event(v1.ObjectReference{Kind:"Pod", Namespace:"taint-based-evictions90d6e2e1-3956-4518-a3ea-bce8891909ea", Name:"testpod-0", UID:"", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TaintManagerEviction' Cancelling deletion of Pod taint-based-evictions90d6e2e1-3956-4518-a3ea-bce8891909ea/testpod-0
I0920 04:43:55.315357  108295 httplog.go:90] PATCH /api/v1/nodes/node-1: (1.977686ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:43:55.315578  108295 controller_utils.go:216] Made sure that Node node-1 has no [&Taint{Key:node.kubernetes.io/memory-pressure,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 04:43:45 +0000 UTC,} &Taint{Key:node.kubernetes.io/disk-pressure,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 04:43:45 +0000 UTC,} &Taint{Key:node.kubernetes.io/pid-pressure,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 04:43:45 +0000 UTC,}] Taint
I0920 04:43:55.315769  108295 httplog.go:90] POST /api/v1/namespaces/taint-based-evictions90d6e2e1-3956-4518-a3ea-bce8891909ea/events: (1.387606ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39028]
I0920 04:43:55.315769  108295 httplog.go:90] POST /api/v1/namespaces/taint-based-evictions90d6e2e1-3956-4518-a3ea-bce8891909ea/events: (1.369112ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39030]
I0920 04:43:55.315950  108295 httplog.go:90] GET /api/v1/nodes/node-1?resourceVersion=0: (2.39317ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39024]
I0920 04:43:55.316198  108295 controller_utils.go:216] Made sure that Node node-1 has no [&Taint{Key:node.kubernetes.io/memory-pressure,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 04:43:45 +0000 UTC,} &Taint{Key:node.kubernetes.io/disk-pressure,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 04:43:45 +0000 UTC,} &Taint{Key:node.kubernetes.io/pid-pressure,Value:,Effect:NoSchedule,TimeAdded:2019-09-20 04:43:45 +0000 UTC,}] Taint
I0920 04:43:55.321053  108295 httplog.go:90] GET /api/v1/nodes/node-2: (947.702µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39028]
I0920 04:43:55.322371  108295 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 5.028417903s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:43:55 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:43:55.322409  108295 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 5.028461156s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:43:55 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:43:55.322424  108295 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 5.028476362s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:43:55 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:43:55.322435  108295 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 5.02848734s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:43:55 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:43:55.322509  108295 node_lifecycle_controller.go:796] Node node-0 is unresponsive as of 2019-09-20 04:43:55.322495547 +0000 UTC m=+296.115536877. Adding it to the Taint queue.
I0920 04:43:55.322539  108295 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 5.028547944s. Last Ready is: &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:0001-01-01 00:00:00 +0000 UTC,Reason:,Message:,}
I0920 04:43:55.322551  108295 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 5.028560553s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:True,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:0001-01-01 00:00:00 +0000 UTC,Reason:,Message:,}
I0920 04:43:55.322561  108295 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 5.028570632s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:True,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:0001-01-01 00:00:00 +0000 UTC,Reason:,Message:,}
I0920 04:43:55.322571  108295 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 5.028579965s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:True,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:0001-01-01 00:00:00 +0000 UTC,Reason:,Message:,}
I0920 04:43:55.324089  108295 httplog.go:90] PUT /api/v1/nodes/node-1/status: (1.293303ms) 409 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39028]
E0920 04:43:55.324314  108295 node_lifecycle_controller.go:1037] Error updating node node-1: Operation cannot be fulfilled on nodes "node-1": the object has been modified; please apply your changes to the latest version and try again
I0920 04:43:55.324603  108295 httplog.go:90] GET /api/v1/nodes/node-0?resourceVersion=0: (359.858µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:43:55.325650  108295 httplog.go:90] GET /api/v1/nodes/node-1: (1.186947ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39028]
I0920 04:43:55.327433  108295 httplog.go:90] PATCH /api/v1/nodes/node-0: (2.074876ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:43:55.327723  108295 controller_utils.go:204] Added [&Taint{Key:node.kubernetes.io/unreachable,Value:,Effect:NoExecute,TimeAdded:2019-09-20 04:43:55.324098352 +0000 UTC m=+296.117139579,}] Taint to Node node-0
I0920 04:43:55.327755  108295 controller_utils.go:216] Made sure that Node node-0 has no [&Taint{Key:node.kubernetes.io/not-ready,Value:,Effect:NoExecute,TimeAdded:<nil>,}] Taint
I0920 04:43:55.328343  108295 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-0"}
I0920 04:43:55.328364  108295 taint_manager.go:438] Updating known taints on node node-0: [{node.kubernetes.io/unreachable  NoExecute 2019-09-20 04:43:55 +0000 UTC}]
I0920 04:43:55.328441  108295 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-0"}
I0920 04:43:55.328475  108295 taint_manager.go:438] Updating known taints on node node-0: [{node.kubernetes.io/unreachable  NoExecute 2019-09-20 04:43:55 +0000 UTC}]
I0920 04:43:55.346269  108295 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 5.05226655s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:43:55 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:43:55.346330  108295 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 5.052337881s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:43:55 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:43:55.346347  108295 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 5.052355545s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:43:55 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:43:55.346358  108295 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 5.052366887s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:43:55 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:43:55.346419  108295 node_lifecycle_controller.go:796] Node node-1 is unresponsive as of 2019-09-20 04:43:55.346403011 +0000 UTC m=+296.139444255. Adding it to the Taint queue.
I0920 04:43:55.346448  108295 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 5.052565276s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:43:55 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:43:55.346495  108295 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 5.052608377s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:43:45 +0000 UTC,LastTransitionTime:2019-09-20 04:43:55 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:43:55.346511  108295 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 5.052627475s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:43:45 +0000 UTC,LastTransitionTime:2019-09-20 04:43:55 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:43:55.346525  108295 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 5.052640987s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:43:45 +0000 UTC,LastTransitionTime:2019-09-20 04:43:55 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:43:55.347835  108295 httplog.go:90] GET /api/v1/nodes/node-2?resourceVersion=0: (887.972µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:43:55.352059  108295 httplog.go:90] PATCH /api/v1/nodes/node-2: (2.937569ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:43:55.352426  108295 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-2"}
I0920 04:43:55.352434  108295 controller_utils.go:204] Added [&Taint{Key:node.kubernetes.io/unreachable,Value:,Effect:NoExecute,TimeAdded:2019-09-20 04:43:55.346551238 +0000 UTC m=+296.139592493,}] Taint to Node node-2
I0920 04:43:55.352475  108295 taint_manager.go:438] Updating known taints on node node-2: [{node.kubernetes.io/unreachable  NoExecute 2019-09-20 04:43:55 +0000 UTC}]
I0920 04:43:55.352527  108295 timed_workers.go:110] Adding TimedWorkerQueue item taint-based-evictions90d6e2e1-3956-4518-a3ea-bce8891909ea/testpod-0 at 2019-09-20 04:43:55.352517885 +0000 UTC m=+296.145559137 to be fired at 2019-09-20 04:48:55.352517885 +0000 UTC m=+596.145559137
I0920 04:43:55.352581  108295 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-2"}
I0920 04:43:55.352679  108295 taint_manager.go:438] Updating known taints on node node-2: [{node.kubernetes.io/unreachable  NoExecute 2019-09-20 04:43:55 +0000 UTC}]
I0920 04:43:55.352839  108295 timed_workers.go:110] Adding TimedWorkerQueue item taint-based-evictions90d6e2e1-3956-4518-a3ea-bce8891909ea/testpod-0 at 2019-09-20 04:43:55.352826613 +0000 UTC m=+296.145867865 to be fired at 2019-09-20 04:48:55.352826613 +0000 UTC m=+596.145867865
I0920 04:43:55.353302  108295 httplog.go:90] GET /api/v1/nodes/node-2?resourceVersion=0: (632.292µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:43:55.353745  108295 controller_utils.go:216] Made sure that Node node-2 has no [&Taint{Key:node.kubernetes.io/not-ready,Value:,Effect:NoExecute,TimeAdded:<nil>,}] Taint
I0920 04:43:55.353839  108295 node_lifecycle_controller.go:1094] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
I0920 04:43:55.354275  108295 httplog.go:90] GET /api/v1/nodes/node-2?resourceVersion=0: (298.193µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:43:55.363587  108295 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.376869ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43256]
I0920 04:43:55.364848  108295 httplog.go:90] GET /api/v1/namespaces/kube-public: (919.878µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43256]
I0920 04:43:55.366269  108295 httplog.go:90] GET /api/v1/namespaces/kube-node-lease: (1.034064ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43256]
I0920 04:43:55.421382  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.24465ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:43:55.521531  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.336169ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:43:55.623010  108295 httplog.go:90] GET /api/v1/nodes/node-2: (2.20108ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:43:55.721710  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.48805ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:43:55.757190  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:55.757479  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:55.757628  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:55.758367  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:55.758372  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:55.762072  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:55.762079  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:55.823288  108295 httplog.go:90] GET /api/v1/nodes/node-2: (2.49951ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:43:55.921871  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.616859ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:43:55.992647  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:55.992868  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:55.992647  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:55.992957  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:55.994627  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:55.994663  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:56.022295  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.840593ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:43:56.123359  108295 httplog.go:90] GET /api/v1/nodes/node-2: (2.091422ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:43:56.198246  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:56.224270  108295 httplog.go:90] GET /api/v1/nodes/node-2: (3.541995ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:43:56.325892  108295 httplog.go:90] GET /api/v1/nodes/node-2: (5.218643ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:43:56.421817  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.629505ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:43:56.521818  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.550409ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:43:56.622621  108295 httplog.go:90] GET /api/v1/nodes/node-2: (2.245993ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:43:56.722398  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.864127ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:43:56.757422  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:56.757686  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:56.757879  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:56.758529  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:56.758552  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:56.762265  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:56.762267  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:56.821863  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.548023ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:43:56.921841  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.545461ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:43:56.992987  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:56.993108  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:56.993130  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:56.993160  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:56.994810  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:56.994840  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:57.021429  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.214334ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:43:57.122086  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.77639ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:43:57.198507  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:57.222870  108295 httplog.go:90] GET /api/v1/nodes/node-2: (2.521614ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:43:57.321695  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.480721ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:43:57.421834  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.651013ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:43:57.521860  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.615125ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:43:57.621828  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.6191ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:43:57.722247  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.933753ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:43:57.757654  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:57.757818  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:57.758051  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:57.758691  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:57.758708  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:57.762436  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:57.762491  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:57.821575  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.335746ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:43:57.921857  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.538837ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:43:57.993231  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:57.993305  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:57.993231  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:57.993249  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:57.994968  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:57.994976  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:58.021889  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.665579ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:43:58.121668  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.529234ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:43:58.198731  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:58.221678  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.449295ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:43:58.321600  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.439989ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:43:58.421663  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.481771ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:43:58.522411  108295 httplog.go:90] GET /api/v1/nodes/node-2: (2.081157ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:43:58.621803  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.592975ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:43:58.653156  108295 httplog.go:90] GET /api/v1/namespaces/default: (1.905259ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43256]
I0920 04:43:58.654834  108295 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.154295ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43256]
I0920 04:43:58.656322  108295 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.055646ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43256]
I0920 04:43:58.721506  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.29229ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:43:58.758073  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:58.758114  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:58.758320  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:58.758795  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:58.758918  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:58.762630  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:58.762649  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:58.821919  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.650375ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:43:58.921943  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.659583ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:43:58.993529  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:58.993564  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:58.993910  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:58.994394  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:58.995264  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:58.995265  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:59.028375  108295 httplog.go:90] GET /api/v1/nodes/node-2: (6.957389ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:43:59.121694  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.518679ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:43:59.199071  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:59.221673  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.395133ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:43:59.322028  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.787746ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:43:59.421892  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.615509ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:43:59.521926  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.717992ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:43:59.621808  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.541335ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:43:59.721925  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.627297ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:43:59.758272  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:59.758286  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:59.758484  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:59.760543  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:59.760598  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:59.762775  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:59.762804  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:59.821971  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.636481ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:43:59.921384  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.201952ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:43:59.993862  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:59.993862  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:59.993974  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:59.994758  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:59.995435  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:43:59.995562  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:00.022424  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.879522ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:00.121653  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.48476ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:00.199294  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:00.222027  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.812363ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:00.313538  108295 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 10.023689916s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:43:55 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:44:00.313605  108295 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 10.02376859s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:43:55 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:44:00.313626  108295 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 10.023790034s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:43:55 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:44:00.313642  108295 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 10.023806436s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:43:55 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:44:00.313701  108295 node_lifecycle_controller.go:796] Node node-0 is unresponsive as of 2019-09-20 04:44:00.313686085 +0000 UTC m=+301.106727335. Adding it to the Taint queue.
I0920 04:44:00.313731  108295 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 10.023830116s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:43:55 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:44:00.313748  108295 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 10.023846816s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:43:55 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:44:00.313763  108295 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 10.023862448s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:43:55 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:44:00.313777  108295 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 10.023877151s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:43:55 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:44:00.313807  108295 node_lifecycle_controller.go:796] Node node-1 is unresponsive as of 2019-09-20 04:44:00.31379665 +0000 UTC m=+301.106837914. Adding it to the Taint queue.
I0920 04:44:00.313834  108295 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 10.023917333s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:43:55 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:44:00.313844  108295 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 10.023927244s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:43:45 +0000 UTC,LastTransitionTime:2019-09-20 04:43:55 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:44:00.313854  108295 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 10.023937296s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:43:45 +0000 UTC,LastTransitionTime:2019-09-20 04:43:55 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:44:00.313871  108295 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 10.023953515s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:43:45 +0000 UTC,LastTransitionTime:2019-09-20 04:43:55 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:44:00.313891  108295 node_lifecycle_controller.go:796] Node node-2 is unresponsive as of 2019-09-20 04:44:00.313883968 +0000 UTC m=+301.106925211. Adding it to the Taint queue.
I0920 04:44:00.322061  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.808816ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:00.354945  108295 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 10.060983466s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:43:55 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:44:00.355142  108295 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 10.061192076s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:43:55 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:44:00.355199  108295 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 10.06125024s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:43:55 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:44:00.355269  108295 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 10.061319046s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:43:55 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:44:00.355388  108295 node_lifecycle_controller.go:796] Node node-0 is unresponsive as of 2019-09-20 04:44:00.355366597 +0000 UTC m=+301.148407848. Adding it to the Taint queue.
I0920 04:44:00.355504  108295 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 10.061510963s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:43:55 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:44:00.355574  108295 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 10.061581904s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:43:55 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:44:00.355615  108295 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 10.061622965s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:43:55 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:44:00.355699  108295 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 10.061699791s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:43:55 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:44:00.355798  108295 node_lifecycle_controller.go:796] Node node-1 is unresponsive as of 2019-09-20 04:44:00.355782105 +0000 UTC m=+301.148823356. Adding it to the Taint queue.
I0920 04:44:00.355872  108295 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 10.061988236s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:43:55 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:44:00.355931  108295 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 10.0620462s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:43:45 +0000 UTC,LastTransitionTime:2019-09-20 04:43:55 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:44:00.355966  108295 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 10.062082759s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:43:45 +0000 UTC,LastTransitionTime:2019-09-20 04:43:55 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:44:00.356011  108295 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 10.06212689s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:43:45 +0000 UTC,LastTransitionTime:2019-09-20 04:43:55 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:44:00.356068  108295 node_lifecycle_controller.go:796] Node node-2 is unresponsive as of 2019-09-20 04:44:00.356056419 +0000 UTC m=+301.149097671. Adding it to the Taint queue.
I0920 04:44:00.422315  108295 httplog.go:90] GET /api/v1/nodes/node-2: (2.049791ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:00.521698  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.436635ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:00.621788  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.541525ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:00.722845  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.621627ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:00.758657  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:00.758755  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:00.758888  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:00.760805  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:00.760936  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:00.763003  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:00.763099  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:00.822336  108295 httplog.go:90] GET /api/v1/nodes/node-2: (2.02663ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:00.921750  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.562716ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:00.994044  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:00.994242  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:00.994044  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:00.994907  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:00.995626  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:00.995634  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:01.021404  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.16282ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:01.122496  108295 httplog.go:90] GET /api/v1/nodes/node-2: (2.097557ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:01.199510  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:01.221601  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.395619ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:01.321715  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.516976ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:01.421768  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.525627ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:01.521718  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.438842ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:01.622085  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.802506ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:01.721833  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.480664ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:01.758852  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:01.758942  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:01.759036  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:01.761112  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:01.761115  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:01.763283  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:01.763437  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:01.823179  108295 httplog.go:90] GET /api/v1/nodes/node-2: (2.345793ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:01.921841  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.558575ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:01.994255  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:01.994486  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:01.994928  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:01.995060  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:01.995781  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:01.995787  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:02.021778  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.543392ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:02.121918  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.629777ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:02.199806  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:02.221797  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.530637ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:02.321941  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.670192ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:02.421584  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.399849ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:02.522349  108295 httplog.go:90] GET /api/v1/nodes/node-2: (2.118298ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:02.622087  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.366549ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:02.722072  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.664766ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:02.759216  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:02.759217  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:02.759305  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:02.761440  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:02.761616  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:02.763517  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:02.764114  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:02.822194  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.806763ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:02.922091  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.725442ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:02.994433  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:02.994721  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:02.995109  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:02.995208  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:02.995937  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:02.995942  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:03.022122  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.576258ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:03.121917  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.626862ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:03.200093  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:03.222261  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.855772ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:03.321486  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.298245ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:03.421843  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.613781ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:03.521721  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.521392ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:03.621777  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.594118ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:03.721760  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.526317ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:03.759503  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:03.759520  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:03.759522  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:03.761687  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:03.761811  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:03.763730  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:03.764349  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:03.821842  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.552071ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:03.922171  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.836602ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:03.994691  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:03.994876  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:03.995243  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:03.995323  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:03.996088  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:03.996089  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:04.022225  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.843103ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:04.121846  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.569176ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:04.200318  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:04.221718  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.506089ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:04.321676  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.468294ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:04.421633  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.431527ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:04.521747  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.575618ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:04.621943  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.661256ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:04.721959  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.71096ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:04.759759  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:04.759810  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:04.759777  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:04.761893  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:04.761986  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:04.763896  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:04.764656  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:04.795538  108295 httplog.go:90] GET /api/v1/namespaces/default: (1.568167ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:04.797068  108295 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.094377ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:04.798274  108295 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (868.204µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:04.821816  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.538023ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:04.922010  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.721018ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:04.994861  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:04.995110  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:04.995364  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:04.995478  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:04.996214  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:04.996241  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:05.021835  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.555976ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:05.121909  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.476495ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:05.200560  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:05.222240  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.837133ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:05.314199  108295 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 15.02434998s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:43:55 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:44:05.314558  108295 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 15.024718983s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:43:55 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:44:05.314664  108295 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 15.02482666s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:43:55 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:44:05.314771  108295 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 15.024933731s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:43:55 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:44:05.314938  108295 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 15.025037349s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:43:55 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:44:05.315016  108295 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 15.025114493s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:43:55 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:44:05.315087  108295 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 15.025185802s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:43:55 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:44:05.315158  108295 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 15.025254939s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:43:55 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:44:05.315283  108295 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 15.025365165s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:43:55 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:44:05.315352  108295 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 15.02543324s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:43:45 +0000 UTC,LastTransitionTime:2019-09-20 04:43:55 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:44:05.315430  108295 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 15.025510814s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:43:45 +0000 UTC,LastTransitionTime:2019-09-20 04:43:55 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:44:05.315515  108295 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 15.025596487s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:43:45 +0000 UTC,LastTransitionTime:2019-09-20 04:43:55 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:44:05.322043  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.743826ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:05.356359  108295 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 15.062393609s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:43:55 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:44:05.356421  108295 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 15.062471869s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:43:55 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:44:05.356438  108295 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 15.062490453s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:43:55 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:44:05.356449  108295 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 15.062501865s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:43:55 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:44:05.356561  108295 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 15.062570123s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:43:55 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:44:05.356573  108295 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 15.062582366s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:43:55 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:44:05.356583  108295 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 15.062592509s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:43:55 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:44:05.356595  108295 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 15.06260452s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:43:55 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:44:05.356626  108295 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 15.062742968s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:43:55 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:44:05.356637  108295 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 15.062753728s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:43:45 +0000 UTC,LastTransitionTime:2019-09-20 04:43:55 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:44:05.356648  108295 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 15.062765353s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:43:45 +0000 UTC,LastTransitionTime:2019-09-20 04:43:55 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:44:05.356658  108295 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 15.062775376s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:43:45 +0000 UTC,LastTransitionTime:2019-09-20 04:43:55 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:44:05.424979  108295 httplog.go:90] GET /api/v1/nodes/node-2: (4.709711ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:05.521766  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.531954ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:05.622113  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.786064ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:05.721984  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.676798ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:05.759896  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:05.759961  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:05.760089  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:05.762066  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:05.762067  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:05.764102  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:05.764853  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:05.822136  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.856948ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:05.921987  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.645875ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:05.995079  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:05.995338  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:05.995530  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:05.995609  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:05.996360  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:05.996504  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:06.021810  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.568956ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:06.122234  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.922403ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:06.200802  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:06.223567  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.813794ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:06.321887  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.628365ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:06.421836  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.608375ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:06.521767  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.530024ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:06.622586  108295 httplog.go:90] GET /api/v1/nodes/node-2: (2.157811ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:06.721895  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.626337ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:06.760115  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:06.760172  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:06.760257  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:06.762292  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:06.762343  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:06.764280  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:06.765008  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:06.821631  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.369969ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:06.921655  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.418577ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:06.995293  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:06.995570  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:06.995694  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:06.995826  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:06.996529  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:06.996816  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:07.024603  108295 httplog.go:90] GET /api/v1/nodes/node-2: (4.260109ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:07.122333  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.605263ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:07.201025  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:07.221767  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.550627ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:07.321513  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.282128ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:07.421660  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.464571ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:07.522078  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.885791ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:07.622145  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.844412ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:07.724782  108295 httplog.go:90] GET /api/v1/nodes/node-2: (2.30769ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:07.760313  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:07.760324  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:07.760340  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:07.762805  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:07.762921  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:07.764968  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:07.765269  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:07.824346  108295 httplog.go:90] GET /api/v1/nodes/node-2: (3.529042ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:07.922244  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.975548ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:07.995692  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:07.996056  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:07.995778  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:07.995789  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:07.996655  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:07.996982  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:08.022570  108295 httplog.go:90] GET /api/v1/nodes/node-2: (2.131506ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:08.121529  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.3154ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:08.201313  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:08.221730  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.42338ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:08.321873  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.67013ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:08.421781  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.578291ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:08.521870  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.642893ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:08.621930  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.680877ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:08.652656  108295 httplog.go:90] GET /api/v1/namespaces/default: (1.449804ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43256]
I0920 04:44:08.654083  108295 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (999.726µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43256]
I0920 04:44:08.655770  108295 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.110084ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43256]
I0920 04:44:08.722647  108295 httplog.go:90] GET /api/v1/nodes/node-2: (2.068533ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:08.760563  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:08.760563  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:08.760581  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:08.763383  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:08.763381  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:08.765198  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:08.765492  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:08.822007  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.671676ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:08.921686  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.425563ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:08.996183  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:08.996313  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:08.996333  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:08.996544  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:08.996784  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:08.997137  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:09.021482  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.313008ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:09.122163  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.562672ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:09.201490  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:09.222062  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.789455ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:09.321858  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.590279ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:09.422197  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.846529ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:09.521746  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.576211ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:09.621612  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.407003ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:09.722102  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.789355ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:09.760749  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:09.760777  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:09.760759  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:09.763577  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:09.763642  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:09.765350  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:09.765652  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:09.821608  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.355978ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:09.922011  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.715096ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:09.996351  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:09.996589  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:09.996607  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:09.996755  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:09.996917  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:09.997305  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:10.022366  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.998059ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:10.122027  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.768649ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:10.201765  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:10.222102  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.746722ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:10.315851  108295 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 20.025920579s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:43:55 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:44:10.315919  108295 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 20.025999124s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:43:45 +0000 UTC,LastTransitionTime:2019-09-20 04:43:55 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:44:10.315941  108295 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 20.026023605s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:43:45 +0000 UTC,LastTransitionTime:2019-09-20 04:43:55 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:44:10.315952  108295 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 20.026034716s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:43:45 +0000 UTC,LastTransitionTime:2019-09-20 04:43:55 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:44:10.316016  108295 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 20.026181571s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:43:55 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:44:10.316028  108295 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 20.026193416s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:43:55 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:44:10.316041  108295 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 20.026206555s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:43:55 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:44:10.316051  108295 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 20.026216653s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:43:55 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:44:10.316126  108295 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 20.026225309s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:43:55 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:44:10.316143  108295 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 20.026242921s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:43:55 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:44:10.316159  108295 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 20.026257798s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:43:55 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:44:10.316175  108295 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 20.026274084s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:43:55 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:44:10.321736  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.494497ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:10.356899  108295 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 20.062942055s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:43:55 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:44:10.356972  108295 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 20.063023063s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:43:55 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:44:10.357001  108295 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 20.063052899s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:43:55 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:44:10.357014  108295 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 20.063066332s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:43:55 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:44:10.357086  108295 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 20.06309489s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:43:55 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:44:10.357097  108295 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 20.063106135s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:43:55 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:44:10.357106  108295 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 20.063115602s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:43:55 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:44:10.357116  108295 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 20.063125262s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:43:55 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:44:10.357145  108295 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 20.063261879s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:43:55 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:44:10.357165  108295 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 20.063282564s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:43:45 +0000 UTC,LastTransitionTime:2019-09-20 04:43:55 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:44:10.357175  108295 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 20.063292019s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:43:45 +0000 UTC,LastTransitionTime:2019-09-20 04:43:55 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:44:10.357184  108295 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 20.063301365s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:43:45 +0000 UTC,LastTransitionTime:2019-09-20 04:43:55 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:44:10.421770  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.571871ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:10.521544  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.381417ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:10.621810  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.617246ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:10.722073  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.737608ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:10.761706  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:10.761706  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:10.762301  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:10.763798  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:10.763858  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:10.765585  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:10.765846  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:10.821981  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.706828ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:10.922074  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.782192ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:10.996486  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:10.996732  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:10.996765  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:10.996892  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:10.997046  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:10.997375  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:11.021396  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.173785ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:11.121522  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.35971ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:11.202194  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:11.221908  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.594904ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:11.322559  108295 httplog.go:90] GET /api/v1/nodes/node-2: (2.280187ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:11.422888  108295 httplog.go:90] GET /api/v1/nodes/node-2: (2.57466ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:11.521752  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.540101ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:11.621823  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.538905ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:11.721813  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.561424ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:11.762368  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:11.762368  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:11.762543  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:11.763975  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:11.763982  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:11.765798  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:11.766045  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:11.821736  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.521148ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:11.921407  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.236243ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:11.996687  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:11.996899  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:11.996922  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:11.997015  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:11.997188  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:11.997540  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:12.021585  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.419391ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:12.122382  108295 httplog.go:90] GET /api/v1/nodes/node-2: (2.107049ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:12.202487  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:12.221869  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.584469ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:12.321934  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.640561ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:12.421903  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.628901ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:12.521831  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.564036ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:12.621735  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.497907ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:12.721972  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.596159ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:12.762554  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:12.762559  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:12.762675  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:12.764150  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:12.764271  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:12.765927  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:12.766185  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:12.821697  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.50599ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:12.921854  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.628128ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:12.996877  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:12.997119  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:12.997119  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:12.997297  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:12.997310  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:12.997730  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:13.021647  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.471696ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:13.123075  108295 httplog.go:90] GET /api/v1/nodes/node-2: (2.782609ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:13.202689  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:13.222040  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.722388ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:13.321708  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.461052ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:13.421930  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.581574ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:13.521928  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.7184ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:13.621921  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.703021ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:13.721800  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.540761ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:13.762734  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:13.762732  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:13.762791  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:13.764338  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:13.764362  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:13.766213  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:13.766407  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:13.821928  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.663792ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:13.922311  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.77106ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:13.997190  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:13.997353  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:13.997439  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:13.997608  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:13.997742  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:13.997996  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:14.021630  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.41381ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:14.122022  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.712964ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:14.203130  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:14.221982  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.734223ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:14.321794  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.568564ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:14.421987  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.726263ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:14.521875  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.686531ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:14.621724  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.535961ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:14.721960  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.603206ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:14.762919  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:14.762921  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:14.762921  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:14.764502  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:14.764505  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:14.766598  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:14.766659  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:14.795650  108295 httplog.go:90] GET /api/v1/namespaces/default: (1.580763ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:14.797024  108295 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (938.62µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:14.798406  108295 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (958.718µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:14.821561  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.353445ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:14.922026  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.640616ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:14.997430  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:14.997547  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:14.997729  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:14.997735  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:14.997917  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:14.998153  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:15.023235  108295 httplog.go:90] GET /api/v1/nodes/node-2: (2.806289ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:15.122246  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.931648ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:15.203360  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:15.223181  108295 httplog.go:90] GET /api/v1/nodes/node-2: (2.818194ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:15.316407  108295 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 25.026468388s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:43:55 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:44:15.316491  108295 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 25.026571727s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:43:45 +0000 UTC,LastTransitionTime:2019-09-20 04:43:55 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:44:15.316511  108295 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 25.026592684s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:43:45 +0000 UTC,LastTransitionTime:2019-09-20 04:43:55 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:44:15.316532  108295 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 25.026612309s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:43:45 +0000 UTC,LastTransitionTime:2019-09-20 04:43:55 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:44:15.316621  108295 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 25.026786012s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:43:55 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:44:15.316634  108295 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 25.02679986s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:43:55 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:44:15.316644  108295 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 25.026809324s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:43:55 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:44:15.316654  108295 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 25.026819352s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:43:55 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:44:15.316686  108295 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 25.026787051s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:43:55 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:44:15.316701  108295 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 25.026801282s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:43:55 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:44:15.316738  108295 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 25.026839009s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:43:55 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:44:15.316748  108295 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 25.026848572s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:43:55 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:44:15.321900  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.663913ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:15.357514  108295 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 25.063553865s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:43:55 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:44:15.357580  108295 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 25.063629776s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:43:55 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:44:15.357609  108295 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 25.063656514s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:43:55 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:44:15.357628  108295 node_lifecycle_controller.go:1022] node node-0 hasn't been updated for 25.063679389s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:43:55 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:44:15.357709  108295 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 25.063716613s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:43:55 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:44:15.357727  108295 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 25.063734929s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:43:55 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:44:15.357742  108295 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 25.063749848s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:43:55 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:44:15.357761  108295 node_lifecycle_controller.go:1022] node node-1 hasn't been updated for 25.063768856s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:43:55 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:44:15.357830  108295 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 25.063945647s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-09-20 04:43:55 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0920 04:44:15.357898  108295 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 25.064011694s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:43:45 +0000 UTC,LastTransitionTime:2019-09-20 04:43:55 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:44:15.357971  108295 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 25.064085312s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:43:45 +0000 UTC,LastTransitionTime:2019-09-20 04:43:55 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:44:15.358056  108295 node_lifecycle_controller.go:1022] node node-2 hasn't been updated for 25.064170377s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-09-20 04:43:45 +0000 UTC,LastTransitionTime:2019-09-20 04:43:55 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I0920 04:44:15.421968  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.720594ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:15.423780  108295 httplog.go:90] GET /api/v1/nodes/node-2: (1.222846ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
Sep 20 04:44:15.424: INFO: Waiting up to 15s for pod "testpod-0" in namespace "taint-based-evictions90d6e2e1-3956-4518-a3ea-bce8891909ea" to be "updated with tolerationSeconds of 200"
I0920 04:44:15.426344  108295 httplog.go:90] GET /api/v1/namespaces/taint-based-evictions90d6e2e1-3956-4518-a3ea-bce8891909ea/pods/testpod-0: (1.331663ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
Sep 20 04:44:15.427: INFO: Pod "testpod-0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.554681ms
Sep 20 04:44:15.427: INFO: Pod "testpod-0" satisfied condition "updated with tolerationSeconds of 200"
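The "updated with tolerationSeconds of 200" condition above means the pod carries a `NoExecute` toleration for the not-ready taint, bounding how long it survives on a tainted node before the taint manager schedules its eviction. A sketch of that lookup, using a local `toleration` type that mirrors the `v1.Toleration` field names but is not the client-go type:

```go
package main

import "fmt"

// toleration is an illustrative stand-in for v1.Toleration.
type toleration struct {
	Key               string
	Operator          string
	Effect            string
	TolerationSeconds *int64
}

// evictionDelay returns how long the taint manager would wait before
// deleting a pod that tolerates the given taint key: a nil
// TolerationSeconds means tolerate indefinitely, a value means schedule
// deletion after that many seconds.
func evictionDelay(tols []toleration, taintKey string) (seconds int64, tolerated bool) {
	for _, t := range tols {
		if t.Key == taintKey {
			if t.TolerationSeconds == nil {
				return 0, true // tolerates forever; no eviction scheduled
			}
			return *t.TolerationSeconds, true
		}
	}
	return 0, false
}

func main() {
	secs := int64(200)
	tols := []toleration{{
		Key:               "node.kubernetes.io/not-ready",
		Operator:          "Exists",
		Effect:            "NoExecute",
		TolerationSeconds: &secs,
	}}
	delay, ok := evictionDelay(tols, "node.kubernetes.io/not-ready")
	fmt.Printf("tolerated=%v, eviction scheduled in %ds\n", ok, delay)
}
```

With `tolerationSeconds: 200`, the taint manager queues a deletion 200 seconds out rather than evicting immediately, which is what the `TimedWorkerQueue` lines below are cancelling once the test deletes the pod itself.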
I0920 04:44:15.432010  108295 httplog.go:90] DELETE /api/v1/namespaces/taint-based-evictions90d6e2e1-3956-4518-a3ea-bce8891909ea/pods/testpod-0: (4.627555ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:15.432310  108295 taint_manager.go:383] Noticed pod deletion: types.NamespacedName{Namespace:"taint-based-evictions90d6e2e1-3956-4518-a3ea-bce8891909ea", Name:"testpod-0"}
I0920 04:44:15.432333  108295 timed_workers.go:129] Cancelling TimedWorkerQueue item taint-based-evictions90d6e2e1-3956-4518-a3ea-bce8891909ea/testpod-0 at 2019-09-20 04:44:15.432330024 +0000 UTC m=+316.225371266
I0920 04:44:15.432337  108295 taint_manager.go:383] Noticed pod deletion: types.NamespacedName{Namespace:"taint-based-evictions90d6e2e1-3956-4518-a3ea-bce8891909ea", Name:"testpod-0"}
I0920 04:44:15.432358  108295 timed_workers.go:129] Cancelling TimedWorkerQueue item taint-based-evictions90d6e2e1-3956-4518-a3ea-bce8891909ea/testpod-0 at 2019-09-20 04:44:15.432354832 +0000 UTC m=+316.225396092
I0920 04:44:15.432628  108295 event.go:255] Event(v1.ObjectReference{Kind:"Pod", Namespace:"taint-based-evictions90d6e2e1-3956-4518-a3ea-bce8891909ea", Name:"testpod-0", UID:"", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TaintManagerEviction' Cancelling deletion of Pod taint-based-evictions90d6e2e1-3956-4518-a3ea-bce8891909ea/testpod-0
I0920 04:44:15.432652  108295 event.go:255] Event(v1.ObjectReference{Kind:"Pod", Namespace:"taint-based-evictions90d6e2e1-3956-4518-a3ea-bce8891909ea", Name:"testpod-0", UID:"", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TaintManagerEviction' Cancelling deletion of Pod taint-based-evictions90d6e2e1-3956-4518-a3ea-bce8891909ea/testpod-0
I0920 04:44:15.434446  108295 httplog.go:90] GET /api/v1/namespaces/taint-based-evictions90d6e2e1-3956-4518-a3ea-bce8891909ea/pods/testpod-0: (693.222µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39038]
I0920 04:44:15.434747  108295 httplog.go:90] PATCH /api/v1/namespaces/taint-based-evictions90d6e2e1-3956-4518-a3ea-bce8891909ea/events/testpod-0.15c60bed9a978b60: (2.021284ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39026]
I0920 04:44:15.434917  108295 httplog.go:90] PATCH /api/v1/namespaces/taint-based-evictions90d6e2e1-3956-4518-a3ea-bce8891909ea/events/testpod-0.15c60bed9a97a20e: (2.133276ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39028]
I0920 04:44:15.438291  108295 node_tree.go:113] Removed node "node-0" in group "region1:\x00:zone1" from NodeTree
I0920 04:44:15.438334  108295 taint_manager.go:422] Noticed node deletion: "node-0"
I0920 04:44:15.438344  108295 taint_manager.go:422] Noticed node deletion: "node-0"
I0920 04:44:15.439809  108295 node_tree.go:113] Removed node "node-1" in group "region1:\x00:zone1" from NodeTree
I0920 04:44:15.439835  108295 taint_manager.go:422] Noticed node deletion: "node-1"
I0920 04:44:15.439835  108295 taint_manager.go:422] Noticed node deletion: "node-1"
I0920 04:44:15.441652  108295 httplog.go:90] DELETE /api/v1/nodes: (6.819415ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39038]
I0920 04:44:15.441695  108295 node_tree.go:113] Removed node "node-2" in group "region1:\x00:zone1" from NodeTree
I0920 04:44:15.441780  108295 taint_manager.go:422] Noticed node deletion: "node-2"
I0920 04:44:15.441786  108295 taint_manager.go:422] Noticed node deletion: "node-2"
I0920 04:44:15.763099  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:15.763130  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:15.763106  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:15.764644  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:15.764647  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:15.766763  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:15.766792  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:15.997640  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:15.997678  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:15.997921  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:15.997919  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:15.998055  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:15.998285  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
I0920 04:44:16.203584  108295 reflector.go:236] k8s.io/client-go/informers/factory.go:134: forcing resync
    --- FAIL: TestTaintBasedEvictions/Taint_based_evictions_for_NodeNotReady_and_200_tolerationseconds (35.11s)
        taint_test.go:782: Failed to taint node in test 0 <node-2>, err: timed out waiting for the condition

				from junit_d965d8661547eb73cabe6d94d5550ec333e4c0fa_20190920-043242.xml



k8s.io/kubernetes/test/integration/scheduler TestTaintBasedEvictions/Taint_based_evictions_for_NodeNotReady_with_no_pod_tolerations 34s

go test -v k8s.io/kubernetes/test/integration/scheduler -run TestTaintBasedEvictions/Taint_based_evictions_for_NodeNotReady_with_no_pod_tolerations$
=== RUN   TestTaintBasedEvictions/Taint_based_evictions_for_NodeNotReady_with_no_pod_tolerations
W0920 04:44:16.443057  108295 services.go:35] No CIDR for service cluster IPs specified. Default value which was 10.0.0.0/24 is deprecated and will be removed in future releases. Please specify it using --service-cluster-ip-range on kube-apiserver.
I0920 04:44:16.443147  108295 services.go:47] Setting service IP to "10.0.0.1" (read-write).
I0920 04:44:16.443228  108295 master.go:303] Node port range unspecified. Defaulting to 30000-32767.
I0920 04:44:16.443314  108295 master.go:259] Using reconciler: 
I0920 04:44:16.444871  108295 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"1140b6eb-6a46-42b0-893d-96f119c23a49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:16.445237  108295 client.go:361] parsed scheme: "endpoint"
I0920 04:44:16.445418  108295 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:44:16.446515  108295 store.go:1342] Monitoring podtemplates count at <storage-prefix>//podtemplates
I0920 04:44:16.446562  108295 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"1140b6eb-6a46-42b0-893d-96f119c23a49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:16.446612  108295 reflector.go:153] Listing and watching *core.PodTemplate from storage/cacher.go:/podtemplates
I0920 04:44:16.447528  108295 client.go:361] parsed scheme: "endpoint"
I0920 04:44:16.447558  108295 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:44:16.447775  108295 watch_cache.go:405] Replace watchCache (rev: 59606) 
I0920 04:44:16.448636  108295 store.go:1342] Monitoring events count at <storage-prefix>//events
I0920 04:44:16.448682  108295 reflector.go:153] Listing and watching *core.Event from storage/cacher.go:/events
I0920 04:44:16.448680  108295 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"1140b6eb-6a46-42b0-893d-96f119c23a49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:16.448900  108295 client.go:361] parsed scheme: "endpoint"
I0920 04:44:16.448931  108295 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:44:16.449579  108295 watch_cache.go:405] Replace watchCache (rev: 59606) 
I0920 04:44:16.449751  108295 store.go:1342] Monitoring limitranges count at <storage-prefix>//limitranges
I0920 04:44:16.449787  108295 reflector.go:153] Listing and watching *core.LimitRange from storage/cacher.go:/limitranges
I0920 04:44:16.449847  108295 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"1140b6eb-6a46-42b0-893d-96f119c23a49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:16.450157  108295 client.go:361] parsed scheme: "endpoint"
I0920 04:44:16.450265  108295 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:44:16.450342  108295 watch_cache.go:405] Replace watchCache (rev: 59606) 
I0920 04:44:16.451220  108295 store.go:1342] Monitoring resourcequotas count at <storage-prefix>//resourcequotas
I0920 04:44:16.451285  108295 reflector.go:153] Listing and watching *core.ResourceQuota from storage/cacher.go:/resourcequotas
I0920 04:44:16.451368  108295 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"1140b6eb-6a46-42b0-893d-96f119c23a49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:16.451604  108295 client.go:361] parsed scheme: "endpoint"
I0920 04:44:16.451629  108295 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:44:16.451927  108295 watch_cache.go:405] Replace watchCache (rev: 59606) 
I0920 04:44:16.452425  108295 store.go:1342] Monitoring secrets count at <storage-prefix>//secrets
I0920 04:44:16.452450  108295 reflector.go:153] Listing and watching *core.Secret from storage/cacher.go:/secrets
I0920 04:44:16.452554  108295 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"1140b6eb-6a46-42b0-893d-96f119c23a49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:16.452711  108295 client.go:361] parsed scheme: "endpoint"
I0920 04:44:16.452730  108295 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:44:16.453216  108295 watch_cache.go:405] Replace watchCache (rev: 59606) 
I0920 04:44:16.453269  108295 store.go:1342] Monitoring persistentvolumes count at <storage-prefix>//persistentvolumes
I0920 04:44:16.453290  108295 reflector.go:153] Listing and watching *core.PersistentVolume from storage/cacher.go:/persistentvolumes
I0920 04:44:16.453485  108295 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"1140b6eb-6a46-42b0-893d-96f119c23a49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:16.453683  108295 client.go:361] parsed scheme: "endpoint"
I0920 04:44:16.453706  108295 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:44:16.454179  108295 watch_cache.go:405] Replace watchCache (rev: 59606) 
I0920 04:44:16.454286  108295 store.go:1342] Monitoring persistentvolumeclaims count at <storage-prefix>//persistentvolumeclaims
I0920 04:44:16.454377  108295 reflector.go:153] Listing and watching *core.PersistentVolumeClaim from storage/cacher.go:/persistentvolumeclaims
I0920 04:44:16.454423  108295 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"1140b6eb-6a46-42b0-893d-96f119c23a49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:16.454630  108295 client.go:361] parsed scheme: "endpoint"
I0920 04:44:16.454650  108295 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:44:16.455173  108295 store.go:1342] Monitoring configmaps count at <storage-prefix>//configmaps
I0920 04:44:16.455232  108295 reflector.go:153] Listing and watching *core.ConfigMap from storage/cacher.go:/configmaps
I0920 04:44:16.455336  108295 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"1140b6eb-6a46-42b0-893d-96f119c23a49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:16.455370  108295 watch_cache.go:405] Replace watchCache (rev: 59606) 
I0920 04:44:16.455565  108295 client.go:361] parsed scheme: "endpoint"
I0920 04:44:16.455598  108295 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:44:16.456024  108295 watch_cache.go:405] Replace watchCache (rev: 59606) 
I0920 04:44:16.456181  108295 store.go:1342] Monitoring namespaces count at <storage-prefix>//namespaces
I0920 04:44:16.456228  108295 reflector.go:153] Listing and watching *core.Namespace from storage/cacher.go:/namespaces
I0920 04:44:16.456404  108295 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"1140b6eb-6a46-42b0-893d-96f119c23a49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:16.456701  108295 client.go:361] parsed scheme: "endpoint"
I0920 04:44:16.456758  108295 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:44:16.457053  108295 watch_cache.go:405] Replace watchCache (rev: 59606) 
I0920 04:44:16.457614  108295 store.go:1342] Monitoring endpoints count at <storage-prefix>//services/endpoints
I0920 04:44:16.457658  108295 reflector.go:153] Listing and watching *core.Endpoints from storage/cacher.go:/services/endpoints
I0920 04:44:16.457740  108295 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"1140b6eb-6a46-42b0-893d-96f119c23a49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:16.457915  108295 client.go:361] parsed scheme: "endpoint"
I0920 04:44:16.457939  108295 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:44:16.458391  108295 watch_cache.go:405] Replace watchCache (rev: 59606) 
I0920 04:44:16.458612  108295 store.go:1342] Monitoring nodes count at <storage-prefix>//minions
I0920 04:44:16.458659  108295 reflector.go:153] Listing and watching *core.Node from storage/cacher.go:/minions
I0920 04:44:16.459739  108295 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"1140b6eb-6a46-42b0-893d-96f119c23a49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:16.459790  108295 watch_cache.go:405] Replace watchCache (rev: 59606) 
I0920 04:44:16.460332  108295 client.go:361] parsed scheme: "endpoint"
I0920 04:44:16.460435  108295 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:44:16.461302  108295 store.go:1342] Monitoring pods count at <storage-prefix>//pods
I0920 04:44:16.461351  108295 reflector.go:153] Listing and watching *core.Pod from storage/cacher.go:/pods
I0920 04:44:16.461443  108295 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"1140b6eb-6a46-42b0-893d-96f119c23a49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:16.461829  108295 client.go:361] parsed scheme: "endpoint"
I0920 04:44:16.461917  108295 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:44:16.462527  108295 watch_cache.go:405] Replace watchCache (rev: 59606) 
I0920 04:44:16.462581  108295 store.go:1342] Monitoring serviceaccounts count at <storage-prefix>//serviceaccounts
I0920 04:44:16.462660  108295 reflector.go:153] Listing and watching *core.ServiceAccount from storage/cacher.go:/serviceaccounts
I0920 04:44:16.462722  108295 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"1140b6eb-6a46-42b0-893d-96f119c23a49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:16.462978  108295 client.go:361] parsed scheme: "endpoint"
I0920 04:44:16.463052  108295 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:44:16.463577  108295 watch_cache.go:405] Replace watchCache (rev: 59606) 
I0920 04:44:16.464238  108295 store.go:1342] Monitoring services count at <storage-prefix>//services/specs
I0920 04:44:16.464264  108295 reflector.go:153] Listing and watching *core.Service from storage/cacher.go:/services/specs
I0920 04:44:16.464276  108295 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"1140b6eb-6a46-42b0-893d-96f119c23a49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:16.465036  108295 watch_cache.go:405] Replace watchCache (rev: 59606) 
I0920 04:44:16.465117  108295 client.go:361] parsed scheme: "endpoint"
I0920 04:44:16.465144  108295 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:44:16.466243  108295 client.go:361] parsed scheme: "endpoint"
I0920 04:44:16.466272  108295 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:44:16.467027  108295 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"1140b6eb-6a46-42b0-893d-96f119c23a49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:16.467189  108295 client.go:361] parsed scheme: "endpoint"
I0920 04:44:16.467249  108295 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:44:16.467846  108295 store.go:1342] Monitoring replicationcontrollers count at <storage-prefix>//controllers
I0920 04:44:16.467871  108295 rest.go:115] the default service ipfamily for this cluster is: IPv4
I0920 04:44:16.467909  108295 reflector.go:153] Listing and watching *core.ReplicationController from storage/cacher.go:/controllers
I0920 04:44:16.468279  108295 storage_factory.go:285] storing bindings in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"1140b6eb-6a46-42b0-893d-96f119c23a49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:16.468495  108295 storage_factory.go:285] storing componentstatuses in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"1140b6eb-6a46-42b0-893d-96f119c23a49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:16.468697  108295 watch_cache.go:405] Replace watchCache (rev: 59606) 
I0920 04:44:16.469068  108295 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"1140b6eb-6a46-42b0-893d-96f119c23a49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:16.469576  108295 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"1140b6eb-6a46-42b0-893d-96f119c23a49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:16.470082  108295 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"1140b6eb-6a46-42b0-893d-96f119c23a49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:16.470849  108295 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"1140b6eb-6a46-42b0-893d-96f119c23a49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:16.471240  108295 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"1140b6eb-6a46-42b0-893d-96f119c23a49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:16.471336  108295 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"1140b6eb-6a46-42b0-893d-96f119c23a49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:16.471585  108295 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"1140b6eb-6a46-42b0-893d-96f119c23a49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:16.471991  108295 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"1140b6eb-6a46-42b0-893d-96f119c23a49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:16.472477  108295 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"1140b6eb-6a46-42b0-893d-96f119c23a49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:16.472663  108295 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"1140b6eb-6a46-42b0-893d-96f119c23a49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:16.473201  108295 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"1140b6eb-6a46-42b0-893d-96f119c23a49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:16.473515  108295 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"1140b6eb-6a46-42b0-893d-96f119c23a49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:16.473976  108295 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"1140b6eb-6a46-42b0-893d-96f119c23a49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:16.474117  108295 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"1140b6eb-6a46-42b0-893d-96f119c23a49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:16.474569  108295 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"1140b6eb-6a46-42b0-893d-96f119c23a49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:16.474811  108295 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"1140b6eb-6a46-42b0-893d-96f119c23a49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:16.474942  108295 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"1140b6eb-6a46-42b0-893d-96f119c23a49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:16.475062  108295 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"1140b6eb-6a46-42b0-893d-96f119c23a49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:16.475235  108295 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"1140b6eb-6a46-42b0-893d-96f119c23a49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:16.475368  108295 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"1140b6eb-6a46-42b0-893d-96f119c23a49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:16.475549  108295 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"1140b6eb-6a46-42b0-893d-96f119c23a49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:16.476134  108295 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"1140b6eb-6a46-42b0-893d-96f119c23a49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:16.476356  108295 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"1140b6eb-6a46-42b0-893d-96f119c23a49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:16.476956  108295 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"1140b6eb-6a46-42b0-893d-96f119c23a49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:16.477562  108295 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"1140b6eb-6a46-42b0-893d-96f119c23a49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:16.477786  108295 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"1140b6eb-6a46-42b0-893d-96f119c23a49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:16.477995  108295 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"1140b6eb-6a46-42b0-893d-96f119c23a49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:16.478512  108295 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"1140b6eb-6a46-42b0-893d-96f119c23a49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:16.479117  108295 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"1140b6eb-6a46-42b0-893d-96f119c23a49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:16.479977  108295 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"1140b6eb-6a46-42b0-893d-96f119c23a49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:16.480860  108295 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"1140b6eb-6a46-42b0-893d-96f119c23a49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:16.481601  108295 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"1140b6eb-6a46-42b0-893d-96f119c23a49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:16.482281  108295 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"1140b6eb-6a46-42b0-893d-96f119c23a49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:16.482712  108295 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"1140b6eb-6a46-42b0-893d-96f119c23a49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:16.482819  108295 master.go:450] Skipping disabled API group "auditregistration.k8s.io".
I0920 04:44:16.482839  108295 master.go:461] Enabling API group "authentication.k8s.io".
I0920 04:44:16.482856  108295 master.go:461] Enabling API group "authorization.k8s.io".
I0920 04:44:16.483106  108295 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"1140b6eb-6a46-42b0-893d-96f119c23a49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:16.483378  108295 client.go:361] parsed scheme: "endpoint"
I0920 04:44:16.483409  108295 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:44:16.484265  108295 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0920 04:44:16.484341  108295 reflector.go:153] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0920 04:44:16.484420  108295 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"1140b6eb-6a46-42b0-893d-96f119c23a49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:16.484688  108295 client.go:361] parsed scheme: "endpoint"
I0920 04:44:16.484724  108295 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:44:16.485368  108295 watch_cache.go:405] Replace watchCache (rev: 59606) 
I0920 04:44:16.486482  108295 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0920 04:44:16.486525  108295 reflector.go:153] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0920 04:44:16.487342  108295 watch_cache.go:405] Replace watchCache (rev: 59606) 
I0920 04:44:16.486693  108295 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"1140b6eb-6a46-42b0-893d-96f119c23a49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:16.488187  108295 client.go:361] parsed scheme: "endpoint"
I0920 04:44:16.488222  108295 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:44:16.489300  108295 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0920 04:44:16.489333  108295 reflector.go:153] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0920 04:44:16.489336  108295 master.go:461] Enabling API group "autoscaling".
I0920 04:44:16.489813  108295 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"1140b6eb-6a46-42b0-893d-96f119c23a49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:16.490482  108295 client.go:361] parsed scheme: "endpoint"
I0920 04:44:16.490515  108295 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:44:16.491009  108295 watch_cache.go:405] Replace watchCache (rev: 59606) 
I0920 04:44:16.491199  108295 store.go:1342] Monitoring jobs.batch count at <storage-prefix>//jobs
I0920 04:44:16.491264  108295 reflector.go:153] Listing and watching *batch.Job from storage/cacher.go:/jobs
I0920 04:44:16.491369  108295 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"1140b6eb-6a46-42b0-893d-96f119c23a49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:16.491622  108295 client.go:361] parsed scheme: "endpoint"
I0920 04:44:16.491653  108295 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:44:16.491894  108295 watch_cache.go:405] Replace watchCache (rev: 59606) 
I0920 04:44:16.492666  108295 store.go:1342] Monitoring cronjobs.batch count at <storage-prefix>//cronjobs
I0920 04:44:16.492705  108295 master.go:461] Enabling API group "batch".
I0920 04:44:16.492729  108295 reflector.go:153] Listing and watching *batch.CronJob from storage/cacher.go:/cronjobs
I0920 04:44:16.492851  108295 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"1140b6eb-6a46-42b0-893d-96f119c23a49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:16.493071  108295 client.go:361] parsed scheme: "endpoint"
I0920 04:44:16.493099  108295 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:44:16.493620  108295 watch_cache.go:405] Replace watchCache (rev: 59606) 
I0920 04:44:16.493774  108295 store.go:1342] Monitoring certificatesigningrequests.certificates.k8s.io count at <storage-prefix>//certificatesigningrequests
I0920 04:44:16.493799  108295 master.go:461] Enabling API group "certificates.k8s.io".
I0920 04:44:16.493857  108295 reflector.go:153] Listing and watching *certificates.CertificateSigningRequest from storage/cacher.go:/certificatesigningrequests
I0920 04:44:16.493906  108295 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"1140b6eb-6a46-42b0-893d-96f119c23a49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:16.494069  108295 client.go:361] parsed scheme: "endpoint"
I0920 04:44:16.494089  108295 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:44:16.494808  108295 watch_cache.go:405] Replace watchCache (rev: 59606) 
I0920 04:44:16.494919  108295 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0920 04:44:16.495055  108295 reflector.go:153] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0920 04:44:16.495084  108295 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"1140b6eb-6a46-42b0-893d-96f119c23a49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:16.495279  108295 client.go:361] parsed scheme: "endpoint"
I0920 04:44:16.495312  108295 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:44:16.495846  108295 watch_cache.go:405] Replace watchCache (rev: 59606) 
I0920 04:44:16.496131  108295 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0920 04:44:16.496150  108295 master.go:461] Enabling API group "coordination.k8s.io".
I0920 04:44:16.496162  108295 master.go:450] Skipping disabled API group "discovery.k8s.io".
I0920 04:44:16.496196  108295 reflector.go:153] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0920 04:44:16.496303  108295 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"1140b6eb-6a46-42b0-893d-96f119c23a49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0920 04:44:16.496515  108295 client.go:361] parsed scheme: "endpoint"
I0920 04:44:16.496545  108295 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0920 04:44:16.496977  108295 watch_cache.go:405] Replace watchCache (rev: 59606) 
I0920 04:44:16.497114  108295 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0920 04:44:16.497149  108295 master.go:461] Enabling API group "extensions".
I0920 04:44:16.497161  108295 reflector.go:153] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0920 04:44:16.497289  108295 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"1140b6eb-6a46-42b0-893d-96f119c23a49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging