PR: draveness: feat: update taint nodes by condition to GA
Result: FAILURE
Tests: 2 failed / 2897 succeeded
Started: 2019-10-19 01:18
Elapsed: 29m40s
Revision:
Builder: gke-prow-ssd-pool-1a225945-mv4c
Refs: master:98fcf2e6, 82703:1163a1d5
pod: 492f3c4b-f20e-11e9-872d-aab87b429caf
infra-commit: 05bddd2c3
repo: k8s.io/kubernetes
repo-commit: b4fb305ba0ea51304a1c7dfcf1f5bae053c60529
repos: {u'k8s.io/kubernetes': u'master:98fcf2e6c7ffc4e1c0c512abb4d1e787441175f5,82703:1163a1d51ed007ff2c3cd6fe548f60fc0b175a24'}

Test Failures


k8s.io/kubernetes/test/integration/scheduler TestTaintBasedEvictions 1m5s

go test -v k8s.io/kubernetes/test/integration/scheduler -run TestTaintBasedEvictions$
=== RUN   TestTaintBasedEvictions
I1019 01:46:43.344298  103889 feature_gate.go:216] feature gates: &{map[EvenPodsSpread:false TaintBasedEvictions:true]}
--- FAIL: TestTaintBasedEvictions (65.49s)

				from junit_d965d8661547eb73cabe6d94d5550ec333e4c0fa_20191019-013631.xml

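The top-level failure above only records the feature-gate configuration (EvenPodsSpread:false, TaintBasedEvictions:true) before the subtests run. As a rough, hypothetical sketch of how an integration test typically flips such a gate for its own duration (helper usage assumed, not taken from this job's source):

// Hypothetical sketch only: enabling the TaintBasedEvictions feature gate
// for a single integration test, in the style the feature_gate.go line
// above suggests. Names here are assumptions, not from this job.
package scheduler

import (
	"testing"

	utilfeature "k8s.io/apiserver/pkg/util/feature"
	featuregatetesting "k8s.io/component-base/featuregate/testing"
	"k8s.io/kubernetes/pkg/features"
)

func TestTaintBasedEvictionsSketch(t *testing.T) {
	// Flip the gate for this test only; the returned func restores the previous value.
	defer featuregatetesting.SetFeatureGateDuringTest(t, utilfeature.DefaultFeatureGate, features.TaintBasedEvictions, true)()

	// ... bring up the test apiserver and scheduler, mark a node NotReady,
	// and assert that pods without matching tolerations are evicted ...
}

With the gate set this way, the go test invocation shown above is the way to reproduce the failing run locally.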


k8s.io/kubernetes/test/integration/scheduler TestTaintBasedEvictions/Taint_based_evictions_for_NodeNotReady_with_no_pod_tolerations 35s

go test -v k8s.io/kubernetes/test/integration/scheduler -run TestTaintBasedEvictions/Taint_based_evictions_for_NodeNotReady_with_no_pod_tolerations$
=== RUN   TestTaintBasedEvictions/Taint_based_evictions_for_NodeNotReady_with_no_pod_tolerations
W1019 01:46:53.394170  103889 services.go:35] No CIDR for service cluster IPs specified. Default value which was 10.0.0.0/24 is deprecated and will be removed in future releases. Please specify it using --service-cluster-ip-range on kube-apiserver.
I1019 01:46:53.394206  103889 services.go:47] Setting service IP to "10.0.0.1" (read-write).
I1019 01:46:53.394222  103889 master.go:305] Node port range unspecified. Defaulting to 30000-32767.
I1019 01:46:53.394235  103889 master.go:261] Using reconciler: 
I1019 01:46:53.397302  103889 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.397615  103889 client.go:357] parsed scheme: "endpoint"
I1019 01:46:53.397781  103889 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1019 01:46:53.399530  103889 store.go:1342] Monitoring podtemplates count at <storage-prefix>//podtemplates
I1019 01:46:53.399620  103889 reflector.go:185] Listing and watching *core.PodTemplate from storage/cacher.go:/podtemplates
I1019 01:46:53.399657  103889 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.400168  103889 client.go:357] parsed scheme: "endpoint"
I1019 01:46:53.400194  103889 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1019 01:46:53.401012  103889 store.go:1342] Monitoring events count at <storage-prefix>//events
I1019 01:46:53.401080  103889 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.401099  103889 reflector.go:185] Listing and watching *core.Event from storage/cacher.go:/events
I1019 01:46:53.401214  103889 client.go:357] parsed scheme: "endpoint"
I1019 01:46:53.401239  103889 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1019 01:46:53.401339  103889 watch_cache.go:409] Replace watchCache (rev: 54984) 
I1019 01:46:53.402302  103889 store.go:1342] Monitoring limitranges count at <storage-prefix>//limitranges
I1019 01:46:53.402361  103889 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.402496  103889 client.go:357] parsed scheme: "endpoint"
I1019 01:46:53.402517  103889 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1019 01:46:53.402587  103889 reflector.go:185] Listing and watching *core.LimitRange from storage/cacher.go:/limitranges
I1019 01:46:53.402777  103889 watch_cache.go:409] Replace watchCache (rev: 54984) 
I1019 01:46:53.404299  103889 store.go:1342] Monitoring resourcequotas count at <storage-prefix>//resourcequotas
I1019 01:46:53.404502  103889 watch_cache.go:409] Replace watchCache (rev: 54984) 
I1019 01:46:53.404626  103889 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.404755  103889 reflector.go:185] Listing and watching *core.ResourceQuota from storage/cacher.go:/resourcequotas
I1019 01:46:53.404807  103889 client.go:357] parsed scheme: "endpoint"
I1019 01:46:53.404834  103889 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1019 01:46:53.405445  103889 store.go:1342] Monitoring secrets count at <storage-prefix>//secrets
I1019 01:46:53.405627  103889 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.405704  103889 watch_cache.go:409] Replace watchCache (rev: 54984) 
I1019 01:46:53.405760  103889 client.go:357] parsed scheme: "endpoint"
I1019 01:46:53.405774  103889 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1019 01:46:53.405837  103889 reflector.go:185] Listing and watching *core.Secret from storage/cacher.go:/secrets
I1019 01:46:53.406730  103889 watch_cache.go:409] Replace watchCache (rev: 54984) 
I1019 01:46:53.407595  103889 store.go:1342] Monitoring persistentvolumes count at <storage-prefix>//persistentvolumes
I1019 01:46:53.407746  103889 reflector.go:185] Listing and watching *core.PersistentVolume from storage/cacher.go:/persistentvolumes
I1019 01:46:53.407845  103889 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.408068  103889 client.go:357] parsed scheme: "endpoint"
I1019 01:46:53.408120  103889 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1019 01:46:53.408676  103889 watch_cache.go:409] Replace watchCache (rev: 54984) 
I1019 01:46:53.408858  103889 store.go:1342] Monitoring persistentvolumeclaims count at <storage-prefix>//persistentvolumeclaims
I1019 01:46:53.408889  103889 reflector.go:185] Listing and watching *core.PersistentVolumeClaim from storage/cacher.go:/persistentvolumeclaims
I1019 01:46:53.409041  103889 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.409258  103889 client.go:357] parsed scheme: "endpoint"
I1019 01:46:53.409287  103889 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1019 01:46:53.409600  103889 watch_cache.go:409] Replace watchCache (rev: 54984) 
I1019 01:46:53.410865  103889 store.go:1342] Monitoring configmaps count at <storage-prefix>//configmaps
I1019 01:46:53.410904  103889 reflector.go:185] Listing and watching *core.ConfigMap from storage/cacher.go:/configmaps
I1019 01:46:53.411073  103889 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.411160  103889 client.go:357] parsed scheme: "endpoint"
I1019 01:46:53.411173  103889 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1019 01:46:53.411936  103889 watch_cache.go:409] Replace watchCache (rev: 54984) 
I1019 01:46:53.412192  103889 store.go:1342] Monitoring namespaces count at <storage-prefix>//namespaces
I1019 01:46:53.412237  103889 reflector.go:185] Listing and watching *core.Namespace from storage/cacher.go:/namespaces
I1019 01:46:53.412390  103889 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.412512  103889 client.go:357] parsed scheme: "endpoint"
I1019 01:46:53.412525  103889 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1019 01:46:53.413501  103889 watch_cache.go:409] Replace watchCache (rev: 54984) 
I1019 01:46:53.414301  103889 store.go:1342] Monitoring endpoints count at <storage-prefix>//services/endpoints
I1019 01:46:53.414325  103889 reflector.go:185] Listing and watching *core.Endpoints from storage/cacher.go:/services/endpoints
I1019 01:46:53.414445  103889 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.414561  103889 client.go:357] parsed scheme: "endpoint"
I1019 01:46:53.414575  103889 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1019 01:46:53.415240  103889 watch_cache.go:409] Replace watchCache (rev: 54984) 
I1019 01:46:53.415268  103889 store.go:1342] Monitoring nodes count at <storage-prefix>//minions
I1019 01:46:53.415318  103889 reflector.go:185] Listing and watching *core.Node from storage/cacher.go:/minions
I1019 01:46:53.415539  103889 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.415651  103889 client.go:357] parsed scheme: "endpoint"
I1019 01:46:53.415669  103889 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1019 01:46:53.416033  103889 watch_cache.go:409] Replace watchCache (rev: 54984) 
I1019 01:46:53.417879  103889 store.go:1342] Monitoring pods count at <storage-prefix>//pods
I1019 01:46:53.417938  103889 reflector.go:185] Listing and watching *core.Pod from storage/cacher.go:/pods
I1019 01:46:53.418081  103889 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.418186  103889 client.go:357] parsed scheme: "endpoint"
I1019 01:46:53.418200  103889 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1019 01:46:53.418924  103889 store.go:1342] Monitoring serviceaccounts count at <storage-prefix>//serviceaccounts
I1019 01:46:53.418949  103889 reflector.go:185] Listing and watching *core.ServiceAccount from storage/cacher.go:/serviceaccounts
I1019 01:46:53.419284  103889 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.419478  103889 client.go:357] parsed scheme: "endpoint"
I1019 01:46:53.419504  103889 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1019 01:46:53.419975  103889 watch_cache.go:409] Replace watchCache (rev: 54984) 
I1019 01:46:53.420390  103889 store.go:1342] Monitoring services count at <storage-prefix>//services/specs
I1019 01:46:53.420452  103889 reflector.go:185] Listing and watching *core.Service from storage/cacher.go:/services/specs
I1019 01:46:53.420457  103889 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.420599  103889 client.go:357] parsed scheme: "endpoint"
I1019 01:46:53.420618  103889 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1019 01:46:53.421334  103889 client.go:357] parsed scheme: "endpoint"
I1019 01:46:53.421367  103889 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1019 01:46:53.422357  103889 watch_cache.go:409] Replace watchCache (rev: 54984) 
I1019 01:46:53.422397  103889 watch_cache.go:409] Replace watchCache (rev: 54984) 
I1019 01:46:53.423519  103889 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.423785  103889 client.go:357] parsed scheme: "endpoint"
I1019 01:46:53.423862  103889 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1019 01:46:53.424645  103889 store.go:1342] Monitoring replicationcontrollers count at <storage-prefix>//controllers
I1019 01:46:53.424756  103889 reflector.go:185] Listing and watching *core.ReplicationController from storage/cacher.go:/controllers
I1019 01:46:53.424977  103889 rest.go:115] the default service ipfamily for this cluster is: IPv4
I1019 01:46:53.425576  103889 watch_cache.go:409] Replace watchCache (rev: 54984) 
I1019 01:46:53.427325  103889 storage_factory.go:285] storing bindings in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.427707  103889 storage_factory.go:285] storing componentstatuses in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.429243  103889 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.430155  103889 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.431908  103889 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.432816  103889 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.433506  103889 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.433872  103889 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.434204  103889 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.434874  103889 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.436148  103889 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.436482  103889 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.437407  103889 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.437865  103889 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.439310  103889 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.439842  103889 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.440722  103889 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.441223  103889 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.441535  103889 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.441860  103889 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.442254  103889 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.442548  103889 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.442839  103889 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.444651  103889 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.445247  103889 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.446175  103889 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.447789  103889 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.448187  103889 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.448605  103889 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.449451  103889 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.450083  103889 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.451584  103889 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.452362  103889 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.453142  103889 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.454527  103889 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.454968  103889 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.455193  103889 master.go:453] Skipping disabled API group "auditregistration.k8s.io".
I1019 01:46:53.455298  103889 master.go:464] Enabling API group "authentication.k8s.io".
I1019 01:46:53.455500  103889 master.go:464] Enabling API group "authorization.k8s.io".
I1019 01:46:53.455859  103889 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.456190  103889 client.go:357] parsed scheme: "endpoint"
I1019 01:46:53.456337  103889 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1019 01:46:53.457358  103889 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I1019 01:46:53.457463  103889 reflector.go:185] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I1019 01:46:53.457626  103889 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.458023  103889 client.go:357] parsed scheme: "endpoint"
I1019 01:46:53.458161  103889 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1019 01:46:53.458998  103889 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I1019 01:46:53.459121  103889 reflector.go:185] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I1019 01:46:53.459224  103889 watch_cache.go:409] Replace watchCache (rev: 54984) 
I1019 01:46:53.459236  103889 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.459366  103889 client.go:357] parsed scheme: "endpoint"
I1019 01:46:53.459387  103889 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1019 01:46:53.460246  103889 watch_cache.go:409] Replace watchCache (rev: 54984) 
I1019 01:46:53.461395  103889 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I1019 01:46:53.461423  103889 master.go:464] Enabling API group "autoscaling".
I1019 01:46:53.461513  103889 reflector.go:185] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I1019 01:46:53.461770  103889 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.461979  103889 client.go:357] parsed scheme: "endpoint"
I1019 01:46:53.462013  103889 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1019 01:46:53.462541  103889 watch_cache.go:409] Replace watchCache (rev: 54984) 
I1019 01:46:53.462836  103889 store.go:1342] Monitoring jobs.batch count at <storage-prefix>//jobs
I1019 01:46:53.462900  103889 reflector.go:185] Listing and watching *batch.Job from storage/cacher.go:/jobs
I1019 01:46:53.463029  103889 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.463141  103889 client.go:357] parsed scheme: "endpoint"
I1019 01:46:53.463168  103889 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1019 01:46:53.464377  103889 watch_cache.go:409] Replace watchCache (rev: 54984) 
I1019 01:46:53.465445  103889 store.go:1342] Monitoring cronjobs.batch count at <storage-prefix>//cronjobs
I1019 01:46:53.465546  103889 reflector.go:185] Listing and watching *batch.CronJob from storage/cacher.go:/cronjobs
I1019 01:46:53.465549  103889 master.go:464] Enabling API group "batch".
I1019 01:46:53.465828  103889 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.466005  103889 client.go:357] parsed scheme: "endpoint"
I1019 01:46:53.466027  103889 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1019 01:46:53.466585  103889 watch_cache.go:409] Replace watchCache (rev: 54984) 
I1019 01:46:53.467826  103889 store.go:1342] Monitoring certificatesigningrequests.certificates.k8s.io count at <storage-prefix>//certificatesigningrequests
I1019 01:46:53.467971  103889 master.go:464] Enabling API group "certificates.k8s.io".
I1019 01:46:53.467924  103889 reflector.go:185] Listing and watching *certificates.CertificateSigningRequest from storage/cacher.go:/certificatesigningrequests
I1019 01:46:53.468817  103889 watch_cache.go:409] Replace watchCache (rev: 54984) 
I1019 01:46:53.469000  103889 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.469270  103889 client.go:357] parsed scheme: "endpoint"
I1019 01:46:53.469553  103889 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1019 01:46:53.470827  103889 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I1019 01:46:53.470961  103889 reflector.go:185] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I1019 01:46:53.471216  103889 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.471455  103889 client.go:357] parsed scheme: "endpoint"
I1019 01:46:53.471548  103889 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1019 01:46:53.471638  103889 watch_cache.go:409] Replace watchCache (rev: 54984) 
I1019 01:46:53.472371  103889 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I1019 01:46:53.472434  103889 reflector.go:185] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I1019 01:46:53.472528  103889 master.go:464] Enabling API group "coordination.k8s.io".
I1019 01:46:53.472604  103889 master.go:453] Skipping disabled API group "discovery.k8s.io".
I1019 01:46:53.472908  103889 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.473113  103889 watch_cache.go:409] Replace watchCache (rev: 54984) 
I1019 01:46:53.476924  103889 client.go:357] parsed scheme: "endpoint"
I1019 01:46:53.476999  103889 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1019 01:46:53.480031  103889 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I1019 01:46:53.480078  103889 master.go:464] Enabling API group "extensions".
I1019 01:46:53.480300  103889 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.480478  103889 reflector.go:185] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I1019 01:46:53.480514  103889 client.go:357] parsed scheme: "endpoint"
I1019 01:46:53.480540  103889 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1019 01:46:53.481272  103889 store.go:1342] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I1019 01:46:53.481311  103889 reflector.go:185] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I1019 01:46:53.481571  103889 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.481753  103889 client.go:357] parsed scheme: "endpoint"
I1019 01:46:53.481789  103889 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1019 01:46:53.482265  103889 watch_cache.go:409] Replace watchCache (rev: 54984) 
I1019 01:46:53.482319  103889 watch_cache.go:409] Replace watchCache (rev: 54984) 
I1019 01:46:53.482826  103889 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I1019 01:46:53.482868  103889 master.go:464] Enabling API group "networking.k8s.io".
I1019 01:46:53.482944  103889 reflector.go:185] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I1019 01:46:53.482935  103889 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.483110  103889 client.go:357] parsed scheme: "endpoint"
I1019 01:46:53.483132  103889 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1019 01:46:53.483786  103889 watch_cache.go:409] Replace watchCache (rev: 54984) 
I1019 01:46:53.483876  103889 store.go:1342] Monitoring runtimeclasses.node.k8s.io count at <storage-prefix>//runtimeclasses
I1019 01:46:53.483923  103889 master.go:464] Enabling API group "node.k8s.io".
I1019 01:46:53.483928  103889 reflector.go:185] Listing and watching *node.RuntimeClass from storage/cacher.go:/runtimeclasses
I1019 01:46:53.484161  103889 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.484284  103889 client.go:357] parsed scheme: "endpoint"
I1019 01:46:53.484296  103889 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1019 01:46:53.485021  103889 store.go:1342] Monitoring poddisruptionbudgets.policy count at <storage-prefix>//poddisruptionbudgets
I1019 01:46:53.485112  103889 reflector.go:185] Listing and watching *policy.PodDisruptionBudget from storage/cacher.go:/poddisruptionbudgets
I1019 01:46:53.485207  103889 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.485290  103889 watch_cache.go:409] Replace watchCache (rev: 54984) 
I1019 01:46:53.485386  103889 client.go:357] parsed scheme: "endpoint"
I1019 01:46:53.485413  103889 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1019 01:46:53.486208  103889 watch_cache.go:409] Replace watchCache (rev: 54984) 
I1019 01:46:53.486255  103889 store.go:1342] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicy
I1019 01:46:53.486273  103889 master.go:464] Enabling API group "policy".
I1019 01:46:53.486326  103889 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.486487  103889 client.go:357] parsed scheme: "endpoint"
I1019 01:46:53.486509  103889 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1019 01:46:53.486965  103889 reflector.go:185] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicy
I1019 01:46:53.487034  103889 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I1019 01:46:53.487107  103889 reflector.go:185] Listing and watching *rbac.Role from storage/cacher.go:/roles
I1019 01:46:53.487217  103889 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.487335  103889 client.go:357] parsed scheme: "endpoint"
I1019 01:46:53.487358  103889 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1019 01:46:53.487627  103889 watch_cache.go:409] Replace watchCache (rev: 54984) 
I1019 01:46:53.488720  103889 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I1019 01:46:53.488774  103889 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.488880  103889 client.go:357] parsed scheme: "endpoint"
I1019 01:46:53.488896  103889 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1019 01:46:53.488949  103889 reflector.go:185] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I1019 01:46:53.489637  103889 watch_cache.go:409] Replace watchCache (rev: 54984) 
I1019 01:46:53.489994  103889 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I1019 01:46:53.490034  103889 watch_cache.go:409] Replace watchCache (rev: 54984) 
I1019 01:46:53.490088  103889 reflector.go:185] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I1019 01:46:53.490208  103889 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.490328  103889 client.go:357] parsed scheme: "endpoint"
I1019 01:46:53.490347  103889 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1019 01:46:53.491319  103889 watch_cache.go:409] Replace watchCache (rev: 54984) 
I1019 01:46:53.492213  103889 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I1019 01:46:53.492274  103889 reflector.go:185] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I1019 01:46:53.492278  103889 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.492393  103889 client.go:357] parsed scheme: "endpoint"
I1019 01:46:53.492475  103889 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1019 01:46:53.493569  103889 watch_cache.go:409] Replace watchCache (rev: 54984) 
I1019 01:46:53.493795  103889 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I1019 01:46:53.493857  103889 reflector.go:185] Listing and watching *rbac.Role from storage/cacher.go:/roles
I1019 01:46:53.493998  103889 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.494110  103889 client.go:357] parsed scheme: "endpoint"
I1019 01:46:53.494138  103889 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1019 01:46:53.494810  103889 watch_cache.go:409] Replace watchCache (rev: 54984) 
I1019 01:46:53.494880  103889 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I1019 01:46:53.494977  103889 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.495083  103889 client.go:357] parsed scheme: "endpoint"
I1019 01:46:53.495102  103889 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1019 01:46:53.494995  103889 reflector.go:185] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I1019 01:46:53.495829  103889 watch_cache.go:409] Replace watchCache (rev: 54984) 
I1019 01:46:53.495943  103889 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I1019 01:46:53.496050  103889 reflector.go:185] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I1019 01:46:53.496142  103889 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.496245  103889 client.go:357] parsed scheme: "endpoint"
I1019 01:46:53.496270  103889 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1019 01:46:53.496880  103889 watch_cache.go:409] Replace watchCache (rev: 54984) 
I1019 01:46:53.497025  103889 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I1019 01:46:53.497061  103889 master.go:464] Enabling API group "rbac.authorization.k8s.io".
I1019 01:46:53.497063  103889 reflector.go:185] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
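Note: the interleaved reflector.go "Listing and watching ..." and watch_cache.go "Replace watchCache (rev: 54984)" lines are the apiserver's cacher doing an initial LIST against etcd and then WATCHing from the returned revision, run concurrently per resource. The same list-then-watch pattern is available to API clients through client-go's Reflector; a minimal sketch, assuming a kubeconfig- or in-cluster-derived rest.Config is passed in:

package main

import (
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/fields"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/cache"
)

func runPodReflector(config *rest.Config, stopCh <-chan struct{}) error {
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		return err
	}
	// ListerWatcher backed by the core/v1 REST client: LIST pods, then WATCH
	// from the resourceVersion the LIST returned (what the cacher does above).
	lw := cache.NewListWatchFromClient(
		clientset.CoreV1().RESTClient(), "pods", metav1.NamespaceAll, fields.Everything())
	store := cache.NewStore(cache.MetaNamespaceKeyFunc)
	reflector := cache.NewReflector(lw, &v1.Pod{}, store, 30*time.Second)
	reflector.Run(stopCh) // blocks until stopCh is closed
	return nil
}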
I1019 01:46:53.499434  103889 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.499590  103889 client.go:357] parsed scheme: "endpoint"
I1019 01:46:53.499611  103889 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1019 01:46:53.500008  103889 watch_cache.go:409] Replace watchCache (rev: 54984) 
I1019 01:46:53.501273  103889 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I1019 01:46:53.501429  103889 reflector.go:185] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I1019 01:46:53.502244  103889 watch_cache.go:409] Replace watchCache (rev: 54984) 
I1019 01:46:53.502603  103889 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.502752  103889 client.go:357] parsed scheme: "endpoint"
I1019 01:46:53.502777  103889 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1019 01:46:53.503515  103889 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I1019 01:46:53.503546  103889 reflector.go:185] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I1019 01:46:53.503547  103889 master.go:464] Enabling API group "scheduling.k8s.io".
I1019 01:46:53.503834  103889 master.go:453] Skipping disabled API group "settings.k8s.io".
I1019 01:46:53.504085  103889 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.504264  103889 client.go:357] parsed scheme: "endpoint"
I1019 01:46:53.504282  103889 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1019 01:46:53.504317  103889 watch_cache.go:409] Replace watchCache (rev: 54984) 
I1019 01:46:53.505885  103889 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I1019 01:46:53.505965  103889 reflector.go:185] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I1019 01:46:53.506162  103889 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.506357  103889 client.go:357] parsed scheme: "endpoint"
I1019 01:46:53.506379  103889 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1019 01:46:53.506968  103889 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I1019 01:46:53.507031  103889 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.506890  103889 watch_cache.go:409] Replace watchCache (rev: 54984) 
I1019 01:46:53.507099  103889 reflector.go:185] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I1019 01:46:53.507142  103889 client.go:357] parsed scheme: "endpoint"
I1019 01:46:53.507154  103889 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1019 01:46:53.508882  103889 watch_cache.go:409] Replace watchCache (rev: 54984) 
I1019 01:46:53.509267  103889 store.go:1342] Monitoring csinodes.storage.k8s.io count at <storage-prefix>//csinodes
I1019 01:46:53.509341  103889 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.509427  103889 client.go:357] parsed scheme: "endpoint"
I1019 01:46:53.509439  103889 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1019 01:46:53.509507  103889 reflector.go:185] Listing and watching *storage.CSINode from storage/cacher.go:/csinodes
I1019 01:46:53.510266  103889 store.go:1342] Monitoring csidrivers.storage.k8s.io count at <storage-prefix>//csidrivers
I1019 01:46:53.510294  103889 watch_cache.go:409] Replace watchCache (rev: 54984) 
I1019 01:46:53.510310  103889 reflector.go:185] Listing and watching *storage.CSIDriver from storage/cacher.go:/csidrivers
I1019 01:46:53.510475  103889 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.510595  103889 client.go:357] parsed scheme: "endpoint"
I1019 01:46:53.510609  103889 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1019 01:46:53.511307  103889 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I1019 01:46:53.511328  103889 watch_cache.go:409] Replace watchCache (rev: 54984) 
I1019 01:46:53.511381  103889 reflector.go:185] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I1019 01:46:53.511664  103889 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.511921  103889 client.go:357] parsed scheme: "endpoint"
I1019 01:46:53.511952  103889 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1019 01:46:53.512472  103889 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I1019 01:46:53.512584  103889 master.go:464] Enabling API group "storage.k8s.io".
I1019 01:46:53.512754  103889 watch_cache.go:409] Replace watchCache (rev: 54984) 
I1019 01:46:53.512943  103889 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.513104  103889 client.go:357] parsed scheme: "endpoint"
I1019 01:46:53.513188  103889 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1019 01:46:53.512507  103889 reflector.go:185] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I1019 01:46:53.514027  103889 watch_cache.go:409] Replace watchCache (rev: 54984) 
I1019 01:46:53.514508  103889 store.go:1342] Monitoring deployments.apps count at <storage-prefix>//deployments
I1019 01:46:53.514648  103889 reflector.go:185] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I1019 01:46:53.514811  103889 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.515048  103889 client.go:357] parsed scheme: "endpoint"
I1019 01:46:53.515486  103889 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1019 01:46:53.515774  103889 watch_cache.go:409] Replace watchCache (rev: 54984) 
I1019 01:46:53.516520  103889 store.go:1342] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I1019 01:46:53.516723  103889 reflector.go:185] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I1019 01:46:53.516722  103889 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.517975  103889 client.go:357] parsed scheme: "endpoint"
I1019 01:46:53.518007  103889 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1019 01:46:53.518134  103889 watch_cache.go:409] Replace watchCache (rev: 54984) 
I1019 01:46:53.519747  103889 store.go:1342] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I1019 01:46:53.519808  103889 reflector.go:185] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I1019 01:46:53.520001  103889 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.520131  103889 client.go:357] parsed scheme: "endpoint"
I1019 01:46:53.520148  103889 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1019 01:46:53.520845  103889 store.go:1342] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I1019 01:46:53.521075  103889 watch_cache.go:409] Replace watchCache (rev: 54984) 
I1019 01:46:53.521163  103889 reflector.go:185] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I1019 01:46:53.521100  103889 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.521286  103889 client.go:357] parsed scheme: "endpoint"
I1019 01:46:53.521304  103889 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1019 01:46:53.521758  103889 watch_cache.go:409] Replace watchCache (rev: 54984) 
I1019 01:46:53.522875  103889 store.go:1342] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I1019 01:46:53.522894  103889 master.go:464] Enabling API group "apps".
I1019 01:46:53.522924  103889 reflector.go:185] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
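Note: every watch cache above initializes at the same revision (rev: 54984) because all stores share the single test etcd; each cache is primed by a LIST and then kept current by a WATCH started at the listed resourceVersion. A client can follow the same discipline against the served API; a sketch, assuming a recent client-go where List and Watch take a context and an already-built clientset:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func watchDeploymentsFromListRV(ctx context.Context, clientset *kubernetes.Clientset) error {
	// LIST once to prime local state and learn the collection's resourceVersion.
	list, err := clientset.AppsV1().Deployments(metav1.NamespaceAll).List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	// WATCH from exactly that resourceVersion so no intervening event is missed.
	w, err := clientset.AppsV1().Deployments(metav1.NamespaceAll).Watch(ctx, metav1.ListOptions{
		ResourceVersion: list.ResourceVersion,
	})
	if err != nil {
		return err
	}
	defer w.Stop()
	for ev := range w.ResultChan() {
		fmt.Printf("%s %T\n", ev.Type, ev.Object)
	}
	return nil
}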
I1019 01:46:53.522940  103889 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.523011  103889 client.go:357] parsed scheme: "endpoint"
I1019 01:46:53.523026  103889 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1019 01:46:53.523624  103889 watch_cache.go:409] Replace watchCache (rev: 54984) 
I1019 01:46:53.524600  103889 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I1019 01:46:53.524651  103889 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.524768  103889 client.go:357] parsed scheme: "endpoint"
I1019 01:46:53.524786  103889 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1019 01:46:53.524880  103889 reflector.go:185] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I1019 01:46:53.525620  103889 watch_cache.go:409] Replace watchCache (rev: 54984) 
I1019 01:46:53.526511  103889 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I1019 01:46:53.526584  103889 reflector.go:185] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I1019 01:46:53.526583  103889 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.526739  103889 client.go:357] parsed scheme: "endpoint"
I1019 01:46:53.526764  103889 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1019 01:46:53.527513  103889 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I1019 01:46:53.527559  103889 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.527569  103889 reflector.go:185] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I1019 01:46:53.527620  103889 watch_cache.go:409] Replace watchCache (rev: 54984) 
I1019 01:46:53.527631  103889 client.go:357] parsed scheme: "endpoint"
I1019 01:46:53.527646  103889 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1019 01:46:53.528128  103889 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I1019 01:46:53.528310  103889 master.go:464] Enabling API group "admissionregistration.k8s.io".
I1019 01:46:53.528314  103889 watch_cache.go:409] Replace watchCache (rev: 54984) 
I1019 01:46:53.528385  103889 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.528407  103889 reflector.go:185] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I1019 01:46:53.529131  103889 client.go:357] parsed scheme: "endpoint"
I1019 01:46:53.529148  103889 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1019 01:46:53.529295  103889 watch_cache.go:409] Replace watchCache (rev: 54984) 
I1019 01:46:53.529628  103889 store.go:1342] Monitoring events count at <storage-prefix>//events
I1019 01:46:53.529646  103889 master.go:464] Enabling API group "events.k8s.io".
I1019 01:46:53.529804  103889 reflector.go:185] Listing and watching *core.Event from storage/cacher.go:/events
I1019 01:46:53.530209  103889 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.530424  103889 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.530745  103889 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.530875  103889 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.530998  103889 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.531035  103889 watch_cache.go:409] Replace watchCache (rev: 54984) 
I1019 01:46:53.531072  103889 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.531205  103889 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.531332  103889 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.532012  103889 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.532184  103889 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
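Note: unlike the resources earlier in the log, the tokenreviews.authentication.k8s.io and *accessreviews.authorization.k8s.io storages just above are not followed by "Monitoring ... count" or etcd dial lines; these are virtual, create-only review APIs evaluated per request rather than persisted. A client-side sketch of exercising one of them, assuming a recent client-go where Create takes a context and an already-built clientset:

package main

import (
	"context"
	"fmt"

	authorizationv1 "k8s.io/api/authorization/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// canIListPods asks the apiserver to evaluate an access review for the current
// caller; nothing is written to etcd, which is why no store appears for it above.
func canIListPods(ctx context.Context, clientset *kubernetes.Clientset) (bool, error) {
	review := &authorizationv1.SelfSubjectAccessReview{
		Spec: authorizationv1.SelfSubjectAccessReviewSpec{
			ResourceAttributes: &authorizationv1.ResourceAttributes{
				Namespace: "default",
				Verb:      "list",
				Resource:  "pods",
			},
		},
	}
	resp, err := clientset.AuthorizationV1().SelfSubjectAccessReviews().Create(ctx, review, metav1.CreateOptions{})
	if err != nil {
		return false, err
	}
	fmt.Println("allowed:", resp.Status.Allowed, "reason:", resp.Status.Reason)
	return resp.Status.Allowed, nil
}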
I1019 01:46:53.533367  103889 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.533763  103889 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.534876  103889 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.535116  103889 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.535904  103889 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.536130  103889 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.537353  103889 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.537569  103889 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.538171  103889 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.538430  103889 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W1019 01:46:53.538463  103889 genericapiserver.go:404] Skipping API batch/v2alpha1 because it has no resources.
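Note: the "Enabling API group ..." and "Skipping API ... because it has no resources" lines record which group/versions this test apiserver will serve; alpha versions such as batch/v2alpha1 are not enabled by default. What the server actually serves can be confirmed from the client side through discovery; a sketch, assuming an already-built clientset:

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
)

func printServedGroups(clientset *kubernetes.Clientset) error {
	groups, err := clientset.Discovery().ServerGroups()
	if err != nil {
		return err
	}
	for _, g := range groups.Groups {
		for _, v := range g.Versions {
			fmt.Println(v.GroupVersion) // e.g. "batch/v1", but not "batch/v2alpha1" by default
		}
	}
	// Probing a single group/version returns an error when it is not served.
	if _, err := clientset.Discovery().ServerResourcesForGroupVersion("batch/v2alpha1"); err != nil {
		fmt.Println("batch/v2alpha1 not served:", err)
	}
	return nil
}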
I1019 01:46:53.539562  103889 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.539759  103889 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.540003  103889 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.540754  103889 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.541470  103889 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.544730  103889 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.545579  103889 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.547149  103889 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.547952  103889 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.548335  103889 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.549090  103889 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W1019 01:46:53.549326  103889 genericapiserver.go:404] Skipping API node.k8s.io/v1alpha1 because it has no resources.
I1019 01:46:53.550835  103889 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.551316  103889 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.552047  103889 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.552756  103889 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.553783  103889 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.554482  103889 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.555203  103889 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.556559  103889 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.557085  103889 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.558077  103889 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.559261  103889 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W1019 01:46:53.559486  103889 genericapiserver.go:404] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
I1019 01:46:53.560375  103889 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.561133  103889 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W1019 01:46:53.561345  103889 genericapiserver.go:404] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
I1019 01:46:53.561754  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:46:53.561927  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:46:53.561945  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:46:53.561897  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:46:53.561919  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:46:53.561941  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:46:53.562043  103889 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.562401  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:46:53.563825  103889 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.564172  103889 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.564875  103889 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.566228  103889 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.568272  103889 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.568930  103889 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W1019 01:46:53.568998  103889 genericapiserver.go:404] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
I1019 01:46:53.570038  103889 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.571903  103889 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.572539  103889 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.573772  103889 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.574289  103889 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.574984  103889 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.577129  103889 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.577495  103889 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.577887  103889 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.579342  103889 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.580724  103889 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.581039  103889 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W1019 01:46:53.581101  103889 genericapiserver.go:404] Skipping API apps/v1beta2 because it has no resources.
W1019 01:46:53.581113  103889 genericapiserver.go:404] Skipping API apps/v1beta1 because it has no resources.
I1019 01:46:53.581900  103889 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.582646  103889 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.584180  103889 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.584728  103889 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.585634  103889 storage_factory.go:285] storing events.events.k8s.io in events.k8s.io/v1beta1, reading as events.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"dd1980d2-093b-4941-9c51-127e644c1645", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1019 01:46:53.589967  103889 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1019 01:46:53.589996  103889 healthz.go:177] healthz check poststarthook/bootstrap-controller failed: not finished
I1019 01:46:53.590006  103889 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1019 01:46:53.590017  103889 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1019 01:46:53.590026  103889 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1019 01:46:53.590037  103889 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[-]poststarthook/bootstrap-controller failed: reason withheld
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1019 01:46:53.590077  103889 httplog.go:90] GET /healthz: (304.64µs) 0 [Go-http-client/1.1 127.0.0.1:53562]
I1019 01:46:53.591264  103889 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (982.026µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53562]
I1019 01:46:53.596297  103889 httplog.go:90] GET /api/v1/services: (1.103365ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53562]
I1019 01:46:53.601319  103889 httplog.go:90] GET /api/v1/services: (1.155539ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53562]
I1019 01:46:53.603484  103889 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1019 01:46:53.603513  103889 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1019 01:46:53.603523  103889 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1019 01:46:53.603532  103889 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1019 01:46:53.603541  103889 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1019 01:46:53.603565  103889 httplog.go:90] GET /healthz: (188.769µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53562]
I1019 01:46:53.605589  103889 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.333076ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:53.605621  103889 httplog.go:90] GET /api/v1/services: (933.731µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53566]
I1019 01:46:53.605759  103889 httplog.go:90] GET /api/v1/services: (1.382438ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53562]
I1019 01:46:53.608139  103889 httplog.go:90] POST /api/v1/namespaces: (2.196626ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53566]
I1019 01:46:53.609236  103889 httplog.go:90] GET /api/v1/namespaces/kube-public: (801.875µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53562]
I1019 01:46:53.611310  103889 httplog.go:90] POST /api/v1/namespaces: (1.708749ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53562]
I1019 01:46:53.613641  103889 httplog.go:90] GET /api/v1/namespaces/kube-node-lease: (1.321562ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53562]
I1019 01:46:53.618469  103889 httplog.go:90] POST /api/v1/namespaces: (1.840899ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53562]
I1019 01:46:53.691290  103889 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1019 01:46:53.691326  103889 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1019 01:46:53.691345  103889 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1019 01:46:53.691353  103889 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1019 01:46:53.691362  103889 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1019 01:46:53.691412  103889 httplog.go:90] GET /healthz: (282.896µs) 0 [Go-http-client/1.1 127.0.0.1:53562]
I1019 01:46:53.704840  103889 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1019 01:46:53.704875  103889 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1019 01:46:53.704884  103889 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1019 01:46:53.704890  103889 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1019 01:46:53.704896  103889 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1019 01:46:53.704934  103889 httplog.go:90] GET /healthz: (260.942µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53562]
I1019 01:46:53.790941  103889 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1019 01:46:53.790978  103889 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1019 01:46:53.790990  103889 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1019 01:46:53.790997  103889 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1019 01:46:53.791005  103889 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1019 01:46:53.791043  103889 httplog.go:90] GET /healthz: (294.166µs) 0 [Go-http-client/1.1 127.0.0.1:53562]
I1019 01:46:53.804946  103889 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1019 01:46:53.804990  103889 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1019 01:46:53.805002  103889 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1019 01:46:53.805015  103889 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1019 01:46:53.805022  103889 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1019 01:46:53.805057  103889 httplog.go:90] GET /healthz: (307.261µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53562]
I1019 01:46:53.890819  103889 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1019 01:46:53.890868  103889 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1019 01:46:53.890881  103889 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1019 01:46:53.890891  103889 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1019 01:46:53.890899  103889 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1019 01:46:53.890935  103889 httplog.go:90] GET /healthz: (278.577µs) 0 [Go-http-client/1.1 127.0.0.1:53562]
I1019 01:46:53.904877  103889 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1019 01:46:53.905040  103889 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1019 01:46:53.905096  103889 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1019 01:46:53.905132  103889 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1019 01:46:53.905174  103889 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1019 01:46:53.905327  103889 httplog.go:90] GET /healthz: (670.344µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53562]
I1019 01:46:53.990791  103889 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1019 01:46:53.990837  103889 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1019 01:46:53.990846  103889 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1019 01:46:53.990853  103889 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1019 01:46:53.990859  103889 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1019 01:46:53.990903  103889 httplog.go:90] GET /healthz: (285.845µs) 0 [Go-http-client/1.1 127.0.0.1:53562]
I1019 01:46:54.004887  103889 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1019 01:46:54.004931  103889 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1019 01:46:54.004953  103889 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1019 01:46:54.004964  103889 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1019 01:46:54.004972  103889 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1019 01:46:54.005011  103889 httplog.go:90] GET /healthz: (329.66µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53562]
I1019 01:46:54.031410  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:46:54.031476  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:46:54.031427  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:46:54.032777  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:46:54.034290  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:46:54.035716  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:46:54.091890  103889 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1019 01:46:54.091928  103889 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1019 01:46:54.091941  103889 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1019 01:46:54.091950  103889 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1019 01:46:54.091958  103889 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1019 01:46:54.092052  103889 httplog.go:90] GET /healthz: (331.577µs) 0 [Go-http-client/1.1 127.0.0.1:53562]
I1019 01:46:54.104819  103889 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1019 01:46:54.104860  103889 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1019 01:46:54.104871  103889 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1019 01:46:54.104880  103889 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1019 01:46:54.104887  103889 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1019 01:46:54.104927  103889 httplog.go:90] GET /healthz: (313.949µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53562]
I1019 01:46:54.191987  103889 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1019 01:46:54.192015  103889 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1019 01:46:54.192024  103889 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1019 01:46:54.192037  103889 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1019 01:46:54.192042  103889 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1019 01:46:54.192068  103889 httplog.go:90] GET /healthz: (276.529µs) 0 [Go-http-client/1.1 127.0.0.1:53562]
I1019 01:46:54.204800  103889 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1019 01:46:54.204842  103889 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1019 01:46:54.204855  103889 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1019 01:46:54.204863  103889 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1019 01:46:54.204871  103889 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1019 01:46:54.204907  103889 httplog.go:90] GET /healthz: (290.18µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53562]
I1019 01:46:54.235836  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:46:54.290757  103889 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1019 01:46:54.290782  103889 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1019 01:46:54.290798  103889 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1019 01:46:54.290804  103889 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1019 01:46:54.290809  103889 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1019 01:46:54.290831  103889 httplog.go:90] GET /healthz: (213.341µs) 0 [Go-http-client/1.1 127.0.0.1:53562]
I1019 01:46:54.304747  103889 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1019 01:46:54.304778  103889 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1019 01:46:54.304787  103889 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1019 01:46:54.304793  103889 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1019 01:46:54.304810  103889 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1019 01:46:54.304843  103889 httplog.go:90] GET /healthz: (296.18µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53562]
I1019 01:46:54.390844  103889 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1019 01:46:54.390895  103889 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1019 01:46:54.390908  103889 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1019 01:46:54.390918  103889 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1019 01:46:54.390926  103889 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1019 01:46:54.390971  103889 httplog.go:90] GET /healthz: (332.301µs) 0 [Go-http-client/1.1 127.0.0.1:53562]
I1019 01:46:54.393976  103889 client.go:357] parsed scheme: "endpoint"
I1019 01:46:54.394066  103889 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1019 01:46:54.406203  103889 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1019 01:46:54.406231  103889 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1019 01:46:54.406242  103889 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1019 01:46:54.406251  103889 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1019 01:46:54.406319  103889 httplog.go:90] GET /healthz: (1.731951ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53562]
I1019 01:46:54.491541  103889 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1019 01:46:54.491574  103889 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1019 01:46:54.491605  103889 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1019 01:46:54.491612  103889 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1019 01:46:54.491683  103889 httplog.go:90] GET /healthz: (1.056133ms) 0 [Go-http-client/1.1 127.0.0.1:53562]
I1019 01:46:54.506234  103889 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1019 01:46:54.506263  103889 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1019 01:46:54.506273  103889 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1019 01:46:54.506281  103889 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1019 01:46:54.506324  103889 httplog.go:90] GET /healthz: (1.712111ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53562]
I1019 01:46:54.562137  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:46:54.562168  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:46:54.562137  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:46:54.562159  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:46:54.562198  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:46:54.562219  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:46:54.562514  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:46:54.590983  103889 httplog.go:90] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-node-critical: (1.192202ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:54.591200  103889 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.41505ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53562]
I1019 01:46:54.591874  103889 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1019 01:46:54.591903  103889 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1019 01:46:54.591914  103889 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1019 01:46:54.591922  103889 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1019 01:46:54.591967  103889 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles: (944.369µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54154]
I1019 01:46:54.591987  103889 httplog.go:90] GET /healthz: (991.242µs) 0 [Go-http-client/1.1 127.0.0.1:54152]
I1019 01:46:54.593167  103889 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: (1.335255ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53562]
I1019 01:46:54.593352  103889 httplog.go:90] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (1.928978ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:54.593511  103889 storage_scheduling.go:139] created PriorityClass system-node-critical with value 2000001000
I1019 01:46:54.594326  103889 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.799396ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54154]
I1019 01:46:54.595757  103889 httplog.go:90] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-cluster-critical: (2.06983ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:54.595858  103889 httplog.go:90] POST /api/v1/namespaces/kube-system/configmaps: (2.396549ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53562]
I1019 01:46:54.595941  103889 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (1.298597ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54154]
I1019 01:46:54.597343  103889 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (1.051627ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53562]
I1019 01:46:54.598608  103889 httplog.go:90] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (2.591179ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:54.598911  103889 storage_scheduling.go:139] created PriorityClass system-cluster-critical with value 2000000000
I1019 01:46:54.598932  103889 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
I1019 01:46:54.599833  103889 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (1.231552ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53562]
I1019 01:46:54.601141  103889 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (868.181µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:54.602340  103889 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (879.273µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:54.603938  103889 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (1.260317ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:54.606531  103889 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1019 01:46:54.606555  103889 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1019 01:46:54.606585  103889 httplog.go:90] GET /healthz: (1.126178ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54152]
I1019 01:46:54.607985  103889 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (3.740713ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:54.609446  103889 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-admin: (1.103175ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:54.611878  103889 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.013478ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:54.612046  103889 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/cluster-admin
I1019 01:46:54.612964  103889 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:discovery: (760.307µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:54.615001  103889 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.595635ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:54.615249  103889 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:discovery
I1019 01:46:54.617541  103889 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:basic-user: (1.816861ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:54.619627  103889 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.693285ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:54.619866  103889 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:basic-user
I1019 01:46:54.621174  103889 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:public-info-viewer: (1.055978ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:54.623149  103889 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.542473ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:54.623387  103889 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:public-info-viewer
I1019 01:46:54.624664  103889 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (935.2µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:54.628431  103889 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.163168ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:54.628683  103889 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/admin
I1019 01:46:54.630023  103889 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (930.641µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:54.632613  103889 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.209953ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:54.632777  103889 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/edit
I1019 01:46:54.635420  103889 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (2.493556ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:54.637589  103889 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.6829ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:54.637803  103889 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/view
I1019 01:46:54.638823  103889 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (741.863µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:54.642301  103889 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.6772ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:54.642480  103889 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin
I1019 01:46:54.643681  103889 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (907.283µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:54.646479  103889 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.309318ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:54.646798  103889 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit
I1019 01:46:54.647919  103889 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (854.752µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:54.650667  103889 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.419719ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:54.650967  103889 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view
I1019 01:46:54.651991  103889 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:heapster: (863.8µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:54.654257  103889 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.93619ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:54.654521  103889 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:heapster
I1019 01:46:54.655572  103889 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node: (883.781µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:54.657903  103889 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.880087ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:54.658813  103889 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node
I1019 01:46:54.660020  103889 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-problem-detector: (944.592µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:54.662280  103889 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.656937ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:54.662760  103889 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector
I1019 01:46:54.663778  103889 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kubelet-api-admin: (775.346µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:54.666201  103889 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.97465ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:54.666521  103889 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kubelet-api-admin
I1019 01:46:54.668070  103889 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-bootstrapper: (1.117128ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:54.670318  103889 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.681104ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:54.670708  103889 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-bootstrapper
I1019 01:46:54.672619  103889 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:auth-delegator: (1.223021ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:54.675130  103889 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.0484ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:54.676368  103889 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:auth-delegator
I1019 01:46:54.678358  103889 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-aggregator: (1.733709ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:54.680264  103889 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.392203ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:54.680537  103889 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator
I1019 01:46:54.681803  103889 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-controller-manager: (1.059368ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:54.687095  103889 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.785913ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:54.688481  103889 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager
I1019 01:46:54.690378  103889 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-dns: (1.581734ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:54.692283  103889 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1019 01:46:54.692303  103889 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1019 01:46:54.692334  103889 httplog.go:90] GET /healthz: (1.774524ms) 0 [Go-http-client/1.1 127.0.0.1:54152]
I1019 01:46:54.695858  103889 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (4.339221ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:54.696100  103889 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-dns
I1019 01:46:54.697678  103889 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:persistent-volume-provisioner: (1.34733ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:54.700717  103889 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.449338ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:54.700893  103889 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner
I1019 01:46:54.702090  103889 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-attacher: (983.575µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:54.704121  103889 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.632786ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:54.704619  103889 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:csi-external-attacher
I1019 01:46:54.706369  103889 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1019 01:46:54.706391  103889 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1019 01:46:54.706413  103889 httplog.go:90] GET /healthz: (1.927638ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54152]
I1019 01:46:54.707479  103889 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:nodeclient: (2.471948ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:54.709509  103889 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.511755ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:54.709871  103889 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:nodeclient
I1019 01:46:54.711505  103889 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient: (1.327761ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:54.713249  103889 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.321529ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:54.713426  103889 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
I1019 01:46:54.715302  103889 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:volume-scheduler: (1.714575ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:54.717177  103889 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.506614ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:54.717664  103889 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:volume-scheduler
I1019 01:46:54.719350  103889 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-proxier: (1.360511ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:54.721251  103889 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.473684ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:54.721645  103889 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-proxier
I1019 01:46:54.723257  103889 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-scheduler: (782.847µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:54.725880  103889 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.299364ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:54.726171  103889 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler
I1019 01:46:54.727293  103889 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-provisioner: (878.646µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:54.730428  103889 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.676436ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:54.730736  103889 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:csi-external-provisioner
I1019 01:46:54.732366  103889 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:attachdetach-controller: (1.35506ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:54.735355  103889 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.311414ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:54.736642  103889 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I1019 01:46:54.742684  103889 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:clusterrole-aggregation-controller: (4.314697ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:54.746406  103889 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.667036ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:54.746602  103889 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I1019 01:46:54.747747  103889 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:cronjob-controller: (912.36µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:54.750098  103889 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.516265ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:54.750621  103889 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller
I1019 01:46:54.751735  103889 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:daemon-set-controller: (840.236µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:54.753642  103889 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.515841ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:54.754035  103889 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I1019 01:46:54.755234  103889 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:deployment-controller: (883.986µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:54.757277  103889 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.602878ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:54.757597  103889 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller
I1019 01:46:54.758958  103889 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:disruption-controller: (1.042807ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:54.762983  103889 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.396921ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:54.763274  103889 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller
I1019 01:46:54.764540  103889 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:endpoint-controller: (932.006µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:54.767056  103889 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.712333ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:54.767257  103889 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller
I1019 01:46:54.768088  103889 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:expand-controller: (582.391µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:54.770334  103889 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.890372ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:54.770614  103889 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:expand-controller
I1019 01:46:54.771717  103889 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:generic-garbage-collector: (880.001µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:54.773968  103889 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.876577ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:54.774188  103889 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I1019 01:46:54.782114  103889 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:horizontal-pod-autoscaler: (3.255438ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:54.784493  103889 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.827909ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:54.787276  103889 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I1019 01:46:54.788384  103889 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:job-controller: (922.218µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:54.792509  103889 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1019 01:46:54.792592  103889 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1019 01:46:54.792712  103889 httplog.go:90] GET /healthz: (1.858748ms) 0 [Go-http-client/1.1 127.0.0.1:54152]
I1019 01:46:54.792957  103889 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.150536ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:54.793228  103889 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller
I1019 01:46:54.794864  103889 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:namespace-controller: (1.375397ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:54.799989  103889 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.830933ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:54.800181  103889 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:namespace-controller
I1019 01:46:54.801438  103889 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:node-controller: (997.106µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:54.804904  103889 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.023512ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:54.805165  103889 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:node-controller
I1019 01:46:54.805413  103889 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1019 01:46:54.805447  103889 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1019 01:46:54.805475  103889 httplog.go:90] GET /healthz: (741.712µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54152]
I1019 01:46:54.806198  103889 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:persistent-volume-binder: (814.082µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:54.807879  103889 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.283999ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:54.808197  103889 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I1019 01:46:54.809551  103889 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pod-garbage-collector: (1.123341ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:54.813253  103889 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.306287ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:54.813537  103889 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I1019 01:46:54.814734  103889 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replicaset-controller: (963.054µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:54.816620  103889 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.458934ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:54.816898  103889 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replicaset-controller
I1019 01:46:54.817931  103889 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replication-controller: (845.535µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:54.819645  103889 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.382819ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:54.820418  103889 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller
I1019 01:46:54.821594  103889 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:resourcequota-controller: (756.112µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:54.827603  103889 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (5.426845ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:54.827958  103889 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I1019 01:46:54.829625  103889 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:route-controller: (1.470974ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:54.832331  103889 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.744584ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:54.832595  103889 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:route-controller
I1019 01:46:54.833668  103889 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-account-controller: (858.139µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:54.835472  103889 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.34517ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:54.835682  103889 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-account-controller
I1019 01:46:54.836616  103889 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-controller: (732.336µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:54.839825  103889 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.27858ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:54.840010  103889 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-controller
I1019 01:46:54.842023  103889 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:statefulset-controller: (1.747485ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:54.844581  103889 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.101599ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:54.844876  103889 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:statefulset-controller
I1019 01:46:54.846175  103889 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:ttl-controller: (1.096791ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:54.849097  103889 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.515262ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:54.849270  103889 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-controller
I1019 01:46:54.850268  103889 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:certificate-controller: (825.348µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:54.853282  103889 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.902036ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:54.853737  103889 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller
I1019 01:46:54.854978  103889 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pvc-protection-controller: (938.76µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:54.857791  103889 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.928035ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:54.858166  103889 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I1019 01:46:54.872247  103889 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pv-protection-controller: (1.376214ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:54.892635  103889 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1019 01:46:54.892673  103889 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1019 01:46:54.892781  103889 httplog.go:90] GET /healthz: (2.208039ms) 0 [Go-http-client/1.1 127.0.0.1:53564]
I1019 01:46:54.894962  103889 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.92566ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54152]
I1019 01:46:54.895415  103889 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I1019 01:46:54.906272  103889 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1019 01:46:54.906301  103889 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1019 01:46:54.906348  103889 httplog.go:90] GET /healthz: (1.747383ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54152]
I1019 01:46:54.912564  103889 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin: (1.501583ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54152]
I1019 01:46:54.933119  103889 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.409159ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54152]
I1019 01:46:54.933347  103889 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin
I1019 01:46:54.951989  103889 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (1.221287ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54152]
I1019 01:46:54.972759  103889 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.852244ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54152]
I1019 01:46:54.973003  103889 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery
I1019 01:46:54.993322  103889 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:basic-user: (1.289705ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54152]
I1019 01:46:54.996445  103889 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1019 01:46:54.996475  103889 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1019 01:46:54.996531  103889 httplog.go:90] GET /healthz: (4.67311ms) 0 [Go-http-client/1.1 127.0.0.1:53564]
I1019 01:46:55.006661  103889 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1019 01:46:55.006719  103889 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1019 01:46:55.006771  103889 httplog.go:90] GET /healthz: (1.093404ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:55.012568  103889 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.77235ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:55.012917  103889 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user
I1019 01:46:55.031656  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:46:55.031740  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:46:55.031677  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:46:55.032137  103889 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:public-info-viewer: (1.304878ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:55.032954  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:46:55.034594  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:46:55.035864  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
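
The "reflector.go ... forcing resync" lines above come from client-go informers whose resync period has elapsed, at which point the cached objects are replayed as update events. The sketch below shows where that interval is configured on a shared informer factory; the 1-second period, the kubeconfig path, and the pod event handler are illustrative choices, not this test's configuration.

package main

import (
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig path for illustration.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// The second argument is the resync period; when it elapses, the reflector
	// logs "forcing resync" and re-delivers the cached objects as Update events.
	factory := informers.NewSharedInformerFactory(cs, 1*time.Second)
	podInformer := factory.Core().V1().Pods().Informer()
	podInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		UpdateFunc: func(oldObj, newObj interface{}) {
			pod := newObj.(*corev1.Pod)
			fmt.Println("resynced/updated pod:", pod.Namespace+"/"+pod.Name)
		},
	})

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)
	cache.WaitForCacheSync(stop, podInformer.HasSynced)

	time.Sleep(5 * time.Second) // let a few resync cycles fire
}
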
I1019 01:46:55.053091  103889 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.122763ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:55.053324  103889 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:public-info-viewer
I1019 01:46:55.072325  103889 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node-proxier: (1.352089ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:55.093654  103889 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.820519ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54152]
I1019 01:46:55.093926  103889 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier
I1019 01:46:55.106784  103889 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1019 01:46:55.106834  103889 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1019 01:46:55.106791  103889 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1019 01:46:55.106924  103889 httplog.go:90] GET /healthz: (2.287016ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54152]
I1019 01:46:55.106933  103889 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1019 01:46:55.107004  103889 httplog.go:90] GET /healthz: (16.396629ms) 0 [Go-http-client/1.1 127.0.0.1:53564]
I1019 01:46:55.112446  103889 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-controller-manager: (1.595871ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:55.134827  103889 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.886117ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:55.135133  103889 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager
I1019 01:46:55.152424  103889 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-dns: (1.576831ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:55.172681  103889 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.836422ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:55.172946  103889 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns
I1019 01:46:55.191582  103889 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1019 01:46:55.191612  103889 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1019 01:46:55.191658  103889 httplog.go:90] GET /healthz: (1.022655ms) 0 [Go-http-client/1.1 127.0.0.1:53564]
I1019 01:46:55.192068  103889 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-scheduler: (1.422716ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54152]
I1019 01:46:55.206502  103889 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1019 01:46:55.206534  103889 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1019 01:46:55.206574  103889 httplog.go:90] GET /healthz: (1.832335ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54152]
I1019 01:46:55.213653  103889 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.777898ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54152]
I1019 01:46:55.213906  103889 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler
I1019 01:46:55.232035  103889 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:volume-scheduler: (1.1538ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54152]
I1019 01:46:55.236026  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:46:55.252883  103889 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.98269ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54152]
I1019 01:46:55.253262  103889 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:volume-scheduler
I1019 01:46:55.271626  103889 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node: (865.26µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54152]
I1019 01:46:55.291378  103889 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1019 01:46:55.291405  103889 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1019 01:46:55.291443  103889 httplog.go:90] GET /healthz: (804.822µs) 0 [Go-http-client/1.1 127.0.0.1:54152]
I1019 01:46:55.293315  103889 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.639746ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:55.293542  103889 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node
I1019 01:46:55.306545  103889 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1019 01:46:55.306579  103889 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1019 01:46:55.306623  103889 httplog.go:90] GET /healthz: (1.25024ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:55.311936  103889 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:attachdetach-controller: (1.114682ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:55.332720  103889 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.889113ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:55.333004  103889 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I1019 01:46:55.351964  103889 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:clusterrole-aggregation-controller: (1.11001ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:55.373066  103889 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.144821ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:55.373313  103889 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I1019 01:46:55.392587  103889 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:cronjob-controller: (1.860785ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54152]
I1019 01:46:55.393818  103889 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1019 01:46:55.393843  103889 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1019 01:46:55.393875  103889 httplog.go:90] GET /healthz: (3.204826ms) 0 [Go-http-client/1.1 127.0.0.1:53564]
I1019 01:46:55.405980  103889 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1019 01:46:55.406020  103889 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1019 01:46:55.406055  103889 httplog.go:90] GET /healthz: (1.362402ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:55.414591  103889 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.767883ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:55.414828  103889 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I1019 01:46:55.432179  103889 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:daemon-set-controller: (1.426745ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:55.453207  103889 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.321358ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:55.453625  103889 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I1019 01:46:55.472037  103889 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:deployment-controller: (1.162656ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:55.491481  103889 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1019 01:46:55.491515  103889 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1019 01:46:55.491553  103889 httplog.go:90] GET /healthz: (877.004µs) 0 [Go-http-client/1.1 127.0.0.1:53564]
I1019 01:46:55.494156  103889 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.780898ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54152]
I1019 01:46:55.494475  103889 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I1019 01:46:55.505778  103889 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1019 01:46:55.505807  103889 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1019 01:46:55.506156  103889 httplog.go:90] GET /healthz: (1.521248ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54152]
I1019 01:46:55.511934  103889 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:disruption-controller: (1.141253ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54152]
I1019 01:46:55.532777  103889 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.909781ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54152]
I1019 01:46:55.533004  103889 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I1019 01:46:55.552087  103889 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:endpoint-controller: (1.191285ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54152]
I1019 01:46:55.562318  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:46:55.562348  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:46:55.562363  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:46:55.562378  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:46:55.562383  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:46:55.562449  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:46:55.562604  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:46:55.574125  103889 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.289484ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54152]
I1019 01:46:55.574396  103889 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I1019 01:46:55.593404  103889 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1019 01:46:55.593446  103889 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1019 01:46:55.593490  103889 httplog.go:90] GET /healthz: (2.816359ms) 0 [Go-http-client/1.1 127.0.0.1:54152]
I1019 01:46:55.595427  103889 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:expand-controller: (4.704488ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:55.606294  103889 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1019 01:46:55.606349  103889 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1019 01:46:55.606388  103889 httplog.go:90] GET /healthz: (1.780887ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:55.612924  103889 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.127968ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:55.613167  103889 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
I1019 01:46:55.632966  103889 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:generic-garbage-collector: (2.146092ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:55.652947  103889 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.078828ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:55.653249  103889 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I1019 01:46:55.672485  103889 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:horizontal-pod-autoscaler: (1.638836ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:55.691642  103889 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1019 01:46:55.691686  103889 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1019 01:46:55.691939  103889 httplog.go:90] GET /healthz: (1.31837ms) 0 [Go-http-client/1.1 127.0.0.1:53564]
I1019 01:46:55.694503  103889 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.841063ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54152]
I1019 01:46:55.695679  103889 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I1019 01:46:55.706328  103889 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1019 01:46:55.706366  103889 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1019 01:46:55.706412  103889 httplog.go:90] GET /healthz: (1.735054ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54152]
I1019 01:46:55.711903  103889 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:job-controller: (1.098933ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54152]
I1019 01:46:55.733781  103889 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.925617ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54152]
I1019 01:46:55.734228  103889 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I1019 01:46:55.753382  103889 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:namespace-controller: (2.529494ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54152]
I1019 01:46:55.773306  103889 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.428498ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54152]
I1019 01:46:55.773607  103889 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I1019 01:46:55.791892  103889 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1019 01:46:55.791917  103889 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1019 01:46:55.791970  103889 httplog.go:90] GET /healthz: (956.182µs) 0 [Go-http-client/1.1 127.0.0.1:53564]
I1019 01:46:55.792963  103889 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:node-controller: (1.569917ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54152]
I1019 01:46:55.813500  103889 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1019 01:46:55.813536  103889 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1019 01:46:55.813595  103889 httplog.go:90] GET /healthz: (7.100546ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54152]
I1019 01:46:55.819471  103889 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (8.659915ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:55.819822  103889 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I1019 01:46:55.831964  103889 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:persistent-volume-binder: (1.194727ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:55.853146  103889 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.343629ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:55.854343  103889 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I1019 01:46:55.874582  103889 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pod-garbage-collector: (3.635075ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:55.893494  103889 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1019 01:46:55.893523  103889 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1019 01:46:55.893561  103889 httplog.go:90] GET /healthz: (2.968479ms) 0 [Go-http-client/1.1 127.0.0.1:53564]
I1019 01:46:55.896661  103889 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.952318ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54152]
I1019 01:46:55.898032  103889 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I1019 01:46:55.905534  103889 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1019 01:46:55.905593  103889 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1019 01:46:55.905642  103889 httplog.go:90] GET /healthz: (1.00704ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54152]
I1019 01:46:55.913094  103889 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replicaset-controller: (2.286845ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54152]
I1019 01:46:55.933901  103889 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.745616ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54152]
I1019 01:46:55.934188  103889 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I1019 01:46:55.953125  103889 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replication-controller: (2.328607ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54152]
I1019 01:46:55.973994  103889 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.453249ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54152]
I1019 01:46:55.974218  103889 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I1019 01:46:55.991764  103889 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1019 01:46:55.991796  103889 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1019 01:46:55.991828  103889 httplog.go:90] GET /healthz: (1.2402ms) 0 [Go-http-client/1.1 127.0.0.1:54152]
I1019 01:46:55.991826  103889 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:resourcequota-controller: (1.056421ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:56.006425  103889 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1019 01:46:56.006478  103889 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1019 01:46:56.006515  103889 httplog.go:90] GET /healthz: (1.958062ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:56.014133  103889 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.323834ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:56.014377  103889 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I1019 01:46:56.031891  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:46:56.031924  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:46:56.031949  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:46:56.032183  103889 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:route-controller: (1.392363ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:56.033104  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:46:56.034788  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:46:56.036016  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:46:56.053460  103889 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.520986ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:56.053748  103889 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I1019 01:46:56.072652  103889 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-account-controller: (1.840272ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:56.091653  103889 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1019 01:46:56.091714  103889 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1019 01:46:56.091754  103889 httplog.go:90] GET /healthz: (1.007049ms) 0 [Go-http-client/1.1 127.0.0.1:53564]
I1019 01:46:56.093005  103889 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.735196ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54152]
I1019 01:46:56.093224  103889 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I1019 01:46:56.105914  103889 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1019 01:46:56.105951  103889 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1019 01:46:56.105999  103889 httplog.go:90] GET /healthz: (1.418483ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54152]
I1019 01:46:56.113000  103889 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-controller: (2.220906ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54152]
I1019 01:46:56.132939  103889 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.069913ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54152]
I1019 01:46:56.133217  103889 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I1019 01:46:56.152233  103889 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:statefulset-controller: (1.411796ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54152]
I1019 01:46:56.173009  103889 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.133004ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54152]
I1019 01:46:56.173279  103889 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I1019 01:46:56.191971  103889 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1019 01:46:56.191999  103889 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1019 01:46:56.192029  103889 httplog.go:90] GET /healthz: (1.395286ms) 0 [Go-http-client/1.1 127.0.0.1:54152]
I1019 01:46:56.192373  103889 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:ttl-controller: (1.681275ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:56.205423  103889 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1019 01:46:56.205458  103889 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1019 01:46:56.205498  103889 httplog.go:90] GET /healthz: (921.699µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:56.212919  103889 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.125642ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:56.213154  103889 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I1019 01:46:56.233078  103889 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:certificate-controller: (2.164443ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:56.236262  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:46:56.253748  103889 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.907573ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:56.254213  103889 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I1019 01:46:56.275452  103889 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pvc-protection-controller: (3.877554ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:56.293578  103889 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.739987ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:56.293833  103889 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I1019 01:46:56.293930  103889 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1019 01:46:56.293950  103889 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1019 01:46:56.293980  103889 httplog.go:90] GET /healthz: (2.186306ms) 0 [Go-http-client/1.1 127.0.0.1:54152]
I1019 01:46:56.308505  103889 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1019 01:46:56.308543  103889 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1019 01:46:56.308630  103889 httplog.go:90] GET /healthz: (1.365984ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54152]
I1019 01:46:56.312879  103889 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pv-protection-controller: (2.121059ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54152]
I1019 01:46:56.333295  103889 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.298891ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54152]
I1019 01:46:56.333556  103889 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I1019 01:46:56.352460  103889 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/extension-apiserver-authentication-reader: (1.606554ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54152]
I1019 01:46:56.355681  103889 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.598497ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54152]
I1019 01:46:56.374464  103889 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (3.60167ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54152]
I1019 01:46:56.374739  103889 storage_rbac.go:278] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
I1019 01:46:56.391669  103889 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1019 01:46:56.391802  103889 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1019 01:46:56.391944  103889 httplog.go:90] GET /healthz: (1.298543ms) 0 [Go-http-client/1.1 127.0.0.1:54152]
I1019 01:46:56.392822  103889 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:bootstrap-signer: (1.99066ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:56.394857  103889 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.470849ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:56.407357  103889 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1019 01:46:56.407941  103889 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1019 01:46:56.408547  103889 httplog.go:90] GET /healthz: (3.639383ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:56.414921  103889 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.761263ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54152]
I1019 01:46:56.415472  103889 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I1019 01:46:56.432055  103889 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:cloud-provider: (1.371289ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54152]
I1019 01:46:56.436612  103889 httplog.go:90] GET /api/v1/namespaces/kube-system: (3.666803ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54152]
I1019 01:46:56.454755  103889 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (3.896372ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54152]
I1019 01:46:56.456434  103889 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I1019 01:46:56.471984  103889 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:token-cleaner: (1.173297ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54152]
I1019 01:46:56.475800  103889 httplog.go:90] GET /api/v1/namespaces/kube-system: (3.087243ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54152]
I1019 01:46:56.491992  103889 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1019 01:46:56.492041  103889 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1019 01:46:56.492076  103889 httplog.go:90] GET /healthz: (1.02526ms) 0 [Go-http-client/1.1 127.0.0.1:54152]
I1019 01:46:56.493812  103889 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.886476ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:56.494187  103889 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I1019 01:46:56.505659  103889 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1019 01:46:56.505816  103889 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1019 01:46:56.506142  103889 httplog.go:90] GET /healthz: (1.482196ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:56.511846  103889 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-controller-manager: (994.143µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:56.513304  103889 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.016588ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:56.533328  103889 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.443537ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:56.533559  103889 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I1019 01:46:56.552028  103889 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-scheduler: (1.185817ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:56.553613  103889 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.041245ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:56.562491  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:46:56.562503  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:46:56.562535  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:46:56.562538  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:46:56.562594  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:46:56.562747  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:46:56.563742  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:46:56.572777  103889 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.872736ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:56.573063  103889 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I1019 01:46:56.593106  103889 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles/system:controller:bootstrap-signer: (1.969757ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:56.594728  103889 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1019 01:46:56.594756  103889 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1019 01:46:56.594824  103889 httplog.go:90] GET /healthz: (3.949302ms) 0 [Go-http-client/1.1 127.0.0.1:54152]
I1019 01:46:56.595362  103889 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.492543ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:56.606064  103889 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1019 01:46:56.606109  103889 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1019 01:46:56.606149  103889 httplog.go:90] GET /healthz: (1.369509ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:56.615040  103889 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles: (4.128837ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:56.616035  103889 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I1019 01:46:56.631777  103889 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::extension-apiserver-authentication-reader: (1.007547ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:56.633657  103889 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.378739ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:56.652673  103889 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.783549ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:56.652982  103889 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::extension-apiserver-authentication-reader in kube-system
I1019 01:46:56.673127  103889 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-controller-manager: (2.318779ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:56.676012  103889 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.582791ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:56.691477  103889 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1019 01:46:56.691517  103889 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1019 01:46:56.691573  103889 httplog.go:90] GET /healthz: (934.104µs) 0 [Go-http-client/1.1 127.0.0.1:53564]
I1019 01:46:56.693388  103889 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.715386ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54152]
I1019 01:46:56.693627  103889 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I1019 01:46:56.705879  103889 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1019 01:46:56.705913  103889 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1019 01:46:56.705966  103889 httplog.go:90] GET /healthz: (1.291645ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54152]
I1019 01:46:56.715747  103889 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-scheduler: (2.430004ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54152]
I1019 01:46:56.718338  103889 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.998787ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54152]
I1019 01:46:56.733100  103889 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.964025ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54152]
I1019 01:46:56.733369  103889 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I1019 01:46:56.752304  103889 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:bootstrap-signer: (1.475467ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54152]
I1019 01:46:56.756211  103889 httplog.go:90] GET /api/v1/namespaces/kube-system: (3.268464ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54152]
I1019 01:46:56.772854  103889 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.002888ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54152]
I1019 01:46:56.773143  103889 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I1019 01:46:56.791843  103889 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1019 01:46:56.791881  103889 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1019 01:46:56.791913  103889 httplog.go:90] GET /healthz: (844.324µs) 0 [Go-http-client/1.1 127.0.0.1:54152]
I1019 01:46:56.794730  103889 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:cloud-provider: (3.953003ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:56.796603  103889 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.456093ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:56.806456  103889 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1019 01:46:56.806492  103889 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1019 01:46:56.806542  103889 httplog.go:90] GET /healthz: (1.903012ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:56.815850  103889 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (5.070415ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:56.816087  103889 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I1019 01:46:56.832834  103889 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:token-cleaner: (2.017577ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:56.834748  103889 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.395209ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:56.842878  103889 httplog.go:90] GET /api/v1/namespaces/default: (1.387651ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36406]
I1019 01:46:56.844469  103889 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.104171ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36406]
I1019 01:46:56.846729  103889 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.762428ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36406]
I1019 01:46:56.852273  103889 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.517301ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:56.852491  103889 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I1019 01:46:56.871783  103889 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings/system:controller:bootstrap-signer: (981.918µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:56.873815  103889 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.536513ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:56.892416  103889 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1019 01:46:56.892447  103889 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1019 01:46:56.892471  103889 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings: (1.632957ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54152]
I1019 01:46:56.892489  103889 httplog.go:90] GET /healthz: (1.85302ms) 0 [Go-http-client/1.1 127.0.0.1:53564]
I1019 01:46:56.892742  103889 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I1019 01:46:56.905846  103889 httplog.go:90] GET /healthz: (1.240982ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:56.907228  103889 httplog.go:90] GET /api/v1/namespaces/default: (1.026571ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:56.909346  103889 httplog.go:90] POST /api/v1/namespaces: (1.666434ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:56.910658  103889 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (918.078µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:56.915413  103889 httplog.go:90] POST /api/v1/namespaces/default/services: (4.257174ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:56.917274  103889 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.226501ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:56.919795  103889 httplog.go:90] POST /api/v1/namespaces/default/endpoints: (2.163393ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:56.991974  103889 httplog.go:90] GET /healthz: (1.308354ms) 200 [Go-http-client/1.1 127.0.0.1:53564]
W1019 01:46:56.993878  103889 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1019 01:46:56.993927  103889 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1019 01:46:56.993995  103889 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1019 01:46:56.994011  103889 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1019 01:46:56.994034  103889 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1019 01:46:56.994067  103889 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1019 01:46:56.994078  103889 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1019 01:46:56.994088  103889 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1019 01:46:56.994098  103889 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1019 01:46:56.994106  103889 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1019 01:46:56.994120  103889 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1019 01:46:56.994132  103889 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I1019 01:46:56.994151  103889 factory.go:281] Creating scheduler from algorithm provider 'DefaultProvider'
I1019 01:46:56.994161  103889 factory.go:369] Creating scheduler with fit predicates 'map[CheckNodeUnschedulable:{} CheckVolumeBinding:{} GeneralPredicates:{} MatchInterPodAffinity:{} MaxAzureDiskVolumeCount:{} MaxCSIVolumeCountPred:{} MaxEBSVolumeCount:{} MaxGCEPDVolumeCount:{} NoDiskConflict:{} NoVolumeZoneConflict:{} PodToleratesNodeTaints:{}]' and priority functions 'map[BalancedResourceAllocation:{} ImageLocalityPriority:{} InterPodAffinityPriority:{} LeastRequestedPriority:{} NodeAffinityPriority:{} NodePreferAvoidPodsPriority:{} SelectorSpreadPriority:{} TaintTolerationPriority:{}]'
I1019 01:46:56.994501  103889 shared_informer.go:197] Waiting for caches to sync for scheduler
I1019 01:46:56.995033  103889 reflector.go:150] Starting reflector *v1.Pod (12h0m0s) from k8s.io/kubernetes/test/integration/scheduler/util.go:204
I1019 01:46:56.995051  103889 reflector.go:185] Listing and watching *v1.Pod from k8s.io/kubernetes/test/integration/scheduler/util.go:204
I1019 01:46:56.997559  103889 httplog.go:90] GET /api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0: (700.983µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53564]
I1019 01:46:56.999320  103889 get.go:251] Starting watch for /api/v1/pods, rv=54984 labels= fields=status.phase!=Failed,status.phase!=Succeeded timeout=7m26s
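(The pod reflector above lists with the field selector status.phase!=Failed,status.phase!=Succeeded so the scheduler cache never carries terminal pods. A minimal client-go sketch of the equivalent list — setup is illustrative, and it is written against a recent client-go whose List method takes a context:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumes a kubeconfig at the default ~/.kube/config location.
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        // Same selector the scheduler's reflector uses: skip terminal pods.
        pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(), metav1.ListOptions{
            FieldSelector: "status.phase!=Failed,status.phase!=Succeeded",
        })
        if err != nil {
            panic(err)
        }
        fmt.Println("non-terminal pods:", len(pods.Items))
    }
)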
I1019 01:46:57.032083  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:46:57.032093  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:46:57.032202  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:46:57.033278  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:46:57.034957  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:46:57.036181  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:46:57.094683  103889 shared_informer.go:227] caches populated
I1019 01:46:57.094747  103889 shared_informer.go:204] Caches are synced for scheduler 
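("Waiting for caches to sync" and "Caches are synced" bracket the standard shared-informer start-up handshake. A minimal sketch of that pattern with client-go, using the same 1s resync interval the factory reflectors above run with — the kubeconfig location and the choice of a Node informer are illustrative only:

    package main

    import (
        "time"

        "k8s.io/client-go/informers"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/cache"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        // 1s resync, matching the "(1s)" reflectors started in this log.
        factory := informers.NewSharedInformerFactory(cs, time.Second)
        nodes := factory.Core().V1().Nodes().Informer()

        stop := make(chan struct{})
        defer close(stop)
        factory.Start(stop)

        // Blocks until the initial List/Watch has populated the local cache,
        // i.e. the point logged above as "Caches are synced".
        if !cache.WaitForCacheSync(stop, nodes.HasSynced) {
            panic("timed out waiting for caches to sync")
        }
    }
)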
I1019 01:46:57.095087  103889 reflector.go:150] Starting reflector *v1beta1.PodDisruptionBudget (1s) from k8s.io/client-go/informers/factory.go:134
I1019 01:46:57.095117  103889 reflector.go:185] Listing and watching *v1beta1.PodDisruptionBudget from k8s.io/client-go/informers/factory.go:134
I1019 01:46:57.095663  103889 reflector.go:150] Starting reflector *v1.ReplicaSet (1s) from k8s.io/client-go/informers/factory.go:134
I1019 01:46:57.095699  103889 reflector.go:185] Listing and watching *v1.ReplicaSet from k8s.io/client-go/informers/factory.go:134
I1019 01:46:57.096011  103889 reflector.go:150] Starting reflector *v1.Node (1s) from k8s.io/client-go/informers/factory.go:134
I1019 01:46:57.096035  103889 reflector.go:150] Starting reflector *v1.StatefulSet (1s) from k8s.io/client-go/informers/factory.go:134
I1019 01:46:57.096041  103889 reflector.go:185] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:134
I1019 01:46:57.096047  103889 reflector.go:185] Listing and watching *v1.StatefulSet from k8s.io/client-go/informers/factory.go:134
I1019 01:46:57.096107  103889 reflector.go:150] Starting reflector *v1.StorageClass (1s) from k8s.io/client-go/informers/factory.go:134
I1019 01:46:57.096122  103889 reflector.go:185] Listing and watching *v1.StorageClass from k8s.io/client-go/informers/factory.go:134
I1019 01:46:57.097052  103889 reflector.go:150] Starting reflector *v1.Service (1s) from k8s.io/client-go/informers/factory.go:134
I1019 01:46:57.097067  103889 reflector.go:185] Listing and watching *v1.Service from k8s.io/client-go/informers/factory.go:134
I1019 01:46:57.097094  103889 reflector.go:150] Starting reflector *v1.PersistentVolume (1s) from k8s.io/client-go/informers/factory.go:134
I1019 01:46:57.097106  103889 reflector.go:185] Listing and watching *v1.PersistentVolume from k8s.io/client-go/informers/factory.go:134
I1019 01:46:57.097310  103889 httplog.go:90] GET /apis/apps/v1/replicasets?limit=500&resourceVersion=0: (1.07231ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54152]
I1019 01:46:57.097515  103889 reflector.go:150] Starting reflector *v1.ReplicationController (1s) from k8s.io/client-go/informers/factory.go:134
I1019 01:46:57.097530  103889 reflector.go:185] Listing and watching *v1.ReplicationController from k8s.io/client-go/informers/factory.go:134
I1019 01:46:57.097540  103889 httplog.go:90] GET /apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: (492.017µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55332]
I1019 01:46:57.098297  103889 get.go:251] Starting watch for /apis/apps/v1/replicasets, rv=54984 labels= fields= timeout=5m19s
I1019 01:46:57.098315  103889 httplog.go:90] GET /api/v1/nodes?limit=500&resourceVersion=0: (336.714µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54152]
I1019 01:46:57.098398  103889 httplog.go:90] GET /apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: (312.591µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55332]
I1019 01:46:57.098426  103889 httplog.go:90] GET /api/v1/services?limit=500&resourceVersion=0: (630.907µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55334]
I1019 01:46:57.098790  103889 get.go:251] Starting watch for /apis/policy/v1beta1/poddisruptionbudgets, rv=54984 labels= fields= timeout=6m5s
I1019 01:46:57.098930  103889 httplog.go:90] GET /api/v1/replicationcontrollers?limit=500&resourceVersion=0: (410.655µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55340]
I1019 01:46:57.098993  103889 httplog.go:90] GET /api/v1/persistentvolumes?limit=500&resourceVersion=0: (437.669µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55336]
I1019 01:46:57.099341  103889 get.go:251] Starting watch for /api/v1/services, rv=56302 labels= fields= timeout=7m57s
I1019 01:46:57.099352  103889 get.go:251] Starting watch for /apis/storage.k8s.io/v1/storageclasses, rv=54984 labels= fields= timeout=8m51s
I1019 01:46:57.099402  103889 get.go:251] Starting watch for /api/v1/nodes, rv=54984 labels= fields= timeout=8m35s
I1019 01:46:57.099530  103889 reflector.go:150] Starting reflector *v1beta1.CSINode (1s) from k8s.io/client-go/informers/factory.go:134
I1019 01:46:57.099543  103889 reflector.go:185] Listing and watching *v1beta1.CSINode from k8s.io/client-go/informers/factory.go:134
I1019 01:46:57.099598  103889 get.go:251] Starting watch for /api/v1/persistentvolumes, rv=54984 labels= fields= timeout=8m12s
I1019 01:46:57.099659  103889 get.go:251] Starting watch for /api/v1/replicationcontrollers, rv=54984 labels= fields= timeout=5m6s
I1019 01:46:57.100386  103889 httplog.go:90] GET /apis/apps/v1/statefulsets?limit=500&resourceVersion=0: (444.727µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55346]
I1019 01:46:57.100580  103889 reflector.go:150] Starting reflector *v1.PersistentVolumeClaim (1s) from k8s.io/client-go/informers/factory.go:134
I1019 01:46:57.100716  103889 reflector.go:185] Listing and watching *v1.PersistentVolumeClaim from k8s.io/client-go/informers/factory.go:134
I1019 01:46:57.101955  103889 get.go:251] Starting watch for /apis/apps/v1/statefulsets, rv=54984 labels= fields= timeout=8m10s
I1019 01:46:57.102185  103889 httplog.go:90] GET /api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: (1.189142ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55346]
I1019 01:46:57.102378  103889 httplog.go:90] GET /apis/storage.k8s.io/v1beta1/csinodes?limit=500&resourceVersion=0: (675.334µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55350]
I1019 01:46:57.103080  103889 get.go:251] Starting watch for /apis/storage.k8s.io/v1beta1/csinodes, rv=54984 labels= fields= timeout=5m21s
I1019 01:46:57.103335  103889 get.go:251] Starting watch for /api/v1/persistentvolumeclaims, rv=54984 labels= fields= timeout=7m54s
I1019 01:46:57.195029  103889 shared_informer.go:227] caches populated
I1019 01:46:57.195067  103889 shared_informer.go:227] caches populated
I1019 01:46:57.195074  103889 shared_informer.go:227] caches populated
I1019 01:46:57.195080  103889 shared_informer.go:227] caches populated
I1019 01:46:57.195086  103889 shared_informer.go:227] caches populated
I1019 01:46:57.195091  103889 shared_informer.go:227] caches populated
I1019 01:46:57.195097  103889 shared_informer.go:227] caches populated
I1019 01:46:57.195112  103889 shared_informer.go:227] caches populated
I1019 01:46:57.195118  103889 shared_informer.go:227] caches populated
I1019 01:46:57.195129  103889 shared_informer.go:227] caches populated
I1019 01:46:57.195412  103889 shared_informer.go:227] caches populated
I1019 01:46:57.197670  103889 httplog.go:90] POST /api/v1/namespaces: (1.783415ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55366]
I1019 01:46:57.198622  103889 node_lifecycle_controller.go:329] Sending events to api server.
I1019 01:46:57.199855  103889 node_lifecycle_controller.go:361] Controller is using taint based evictions.
W1019 01:46:57.199888  103889 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I1019 01:46:57.199950  103889 taint_manager.go:162] Sending events to api server.
I1019 01:46:57.200310  103889 node_lifecycle_controller.go:455] Controller will reconcile labels.
W1019 01:46:57.200376  103889 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1019 01:46:57.200406  103889 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I1019 01:46:57.200480  103889 node_lifecycle_controller.go:488] Starting node controller
I1019 01:46:57.200731  103889 reflector.go:150] Starting reflector *v1.Namespace (1s) from k8s.io/client-go/informers/factory.go:134
I1019 01:46:57.200746  103889 reflector.go:185] Listing and watching *v1.Namespace from k8s.io/client-go/informers/factory.go:134
I1019 01:46:57.201023  103889 shared_informer.go:197] Waiting for caches to sync for taint
I1019 01:46:57.202202  103889 httplog.go:90] GET /api/v1/namespaces?limit=500&resourceVersion=0: (906.293µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55366]
I1019 01:46:57.203483  103889 get.go:251] Starting watch for /api/v1/namespaces, rv=56444 labels= fields= timeout=6m8s
I1019 01:46:57.236767  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:46:57.300637  103889 shared_informer.go:227] caches populated
I1019 01:46:57.301281  103889 reflector.go:150] Starting reflector *v1.Lease (1s) from k8s.io/client-go/informers/factory.go:134
I1019 01:46:57.301310  103889 reflector.go:185] Listing and watching *v1.Lease from k8s.io/client-go/informers/factory.go:134
I1019 01:46:57.301816  103889 reflector.go:150] Starting reflector *v1.Pod (1s) from k8s.io/client-go/informers/factory.go:134
I1019 01:46:57.301834  103889 reflector.go:185] Listing and watching *v1.Pod from k8s.io/client-go/informers/factory.go:134
I1019 01:46:57.302122  103889 reflector.go:150] Starting reflector *v1.DaemonSet (1s) from k8s.io/client-go/informers/factory.go:134
I1019 01:46:57.302141  103889 reflector.go:185] Listing and watching *v1.DaemonSet from k8s.io/client-go/informers/factory.go:134
I1019 01:46:57.302973  103889 httplog.go:90] GET /api/v1/pods?limit=500&resourceVersion=0: (564.257µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55384]
I1019 01:46:57.302989  103889 httplog.go:90] GET /apis/coordination.k8s.io/v1/leases?limit=500&resourceVersion=0: (618.643µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55382]
I1019 01:46:57.303004  103889 httplog.go:90] GET /apis/apps/v1/daemonsets?limit=500&resourceVersion=0: (386.378µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55386]
I1019 01:46:57.303904  103889 get.go:251] Starting watch for /apis/coordination.k8s.io/v1/leases, rv=54984 labels= fields= timeout=9m1s
I1019 01:46:57.303984  103889 get.go:251] Starting watch for /apis/apps/v1/daemonsets, rv=54984 labels= fields= timeout=8m42s
I1019 01:46:57.304005  103889 get.go:251] Starting watch for /api/v1/pods, rv=54984 labels= fields= timeout=9m57s
I1019 01:46:57.305666  103889 shared_informer.go:227] caches populated
I1019 01:46:57.305823  103889 shared_informer.go:227] caches populated
I1019 01:46:57.306084  103889 shared_informer.go:227] caches populated
I1019 01:46:57.306199  103889 shared_informer.go:227] caches populated
I1019 01:46:57.306252  103889 shared_informer.go:227] caches populated
I1019 01:46:57.306321  103889 shared_informer.go:227] caches populated
I1019 01:46:57.306426  103889 shared_informer.go:227] caches populated
I1019 01:46:57.306513  103889 shared_informer.go:227] caches populated
I1019 01:46:57.306599  103889 shared_informer.go:227] caches populated
I1019 01:46:57.306653  103889 shared_informer.go:227] caches populated
I1019 01:46:57.306760  103889 shared_informer.go:227] caches populated
I1019 01:46:57.306845  103889 shared_informer.go:227] caches populated
I1019 01:46:57.306938  103889 shared_informer.go:227] caches populated
I1019 01:46:57.310951  103889 httplog.go:90] POST /api/v1/nodes: (3.138826ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:46:57.311318  103889 node_tree.go:86] Added node "node-0" in group "region1:\x00:zone1" to NodeTree
I1019 01:46:57.314106  103889 httplog.go:90] POST /api/v1/nodes: (2.486739ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:46:57.314326  103889 node_tree.go:86] Added node "node-1" in group "region1:\x00:zone1" to NodeTree
I1019 01:46:57.318749  103889 node_tree.go:86] Added node "node-2" in group "region1:\x00:zone1" to NodeTree
I1019 01:46:57.319526  103889 httplog.go:90] POST /api/v1/nodes: (4.881789ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:46:57.323882  103889 scheduling_queue.go:841] About to try and schedule pod taint-based-evictions2d5bd2eb-3e9b-4f4d-9205-fc148b97f1b9/testpod-1
I1019 01:46:57.323908  103889 scheduler.go:598] Attempting to schedule pod: taint-based-evictions2d5bd2eb-3e9b-4f4d-9205-fc148b97f1b9/testpod-1
I1019 01:46:57.324048  103889 httplog.go:90] POST /api/v1/namespaces/taint-based-evictions2d5bd2eb-3e9b-4f4d-9205-fc148b97f1b9/pods: (4.009545ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:46:57.324458  103889 scheduler_binder.go:257] AssumePodVolumes for pod "taint-based-evictions2d5bd2eb-3e9b-4f4d-9205-fc148b97f1b9/testpod-1", node "node-0"
I1019 01:46:57.324529  103889 scheduler_binder.go:267] AssumePodVolumes for pod "taint-based-evictions2d5bd2eb-3e9b-4f4d-9205-fc148b97f1b9/testpod-1", node "node-0": all PVCs bound and nothing to do
I1019 01:46:57.324617  103889 factory.go:703] Attempting to bind testpod-1 to node-0
I1019 01:46:57.328116  103889 httplog.go:90] POST /api/v1/namespaces/taint-based-evictions2d5bd2eb-3e9b-4f4d-9205-fc148b97f1b9/pods/testpod-1/binding: (3.001218ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:46:57.328575  103889 scheduler.go:737] pod taint-based-evictions2d5bd2eb-3e9b-4f4d-9205-fc148b97f1b9/testpod-1 is bound successfully on node "node-0", 3 nodes evaluated, 3 nodes were found feasible. Bound node resource: "Capacity: CPU<4>|Memory<16Gi>|Pods<110>|StorageEphemeral<0>; Allocatable: CPU<4>|Memory<16Gi>|Pods<110>|StorageEphemeral<0>.".
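(The /binding POST above is how the scheduler records its placement; once it succeeds, the decision is visible as spec.nodeName on the pod. A minimal sketch of reading it back — the namespace is a placeholder, since the test generates a random one:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        // "my-namespace" is a placeholder for the test's generated namespace.
        pod, err := cs.CoreV1().Pods("my-namespace").Get(context.TODO(), "testpod-1", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        // After a successful binding, the scheduler's choice shows up here.
        fmt.Println("bound to:", pod.Spec.NodeName)
    }
)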
I1019 01:46:57.333531  103889 node_lifecycle_controller.go:737] Controller observed a Node deletion: node-0
I1019 01:46:57.333553  103889 controller_utils.go:157] Recording Removing Node node-0 from Controller event message for node node-0
I1019 01:46:57.333582  103889 node_lifecycle_controller.go:737] Controller observed a Node deletion: node-1
I1019 01:46:57.333588  103889 controller_utils.go:157] Recording Removing Node node-1 from Controller event message for node node-1
I1019 01:46:57.333598  103889 node_lifecycle_controller.go:737] Controller observed a Node deletion: node-2
I1019 01:46:57.333603  103889 controller_utils.go:157] Recording Removing Node node-2 from Controller event message for node node-2
I1019 01:46:57.333633  103889 event.go:262] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-0", UID:"d7de0b16-9617-4898-8ce8-b2e9bc523bf8", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RemovingNode' Node node-0 event: Removing Node node-0 from Controller
I1019 01:46:57.333653  103889 event.go:262] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-2", UID:"452aba85-74cf-45d7-819a-4571ab45064f", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RemovingNode' Node node-2 event: Removing Node node-2 from Controller
I1019 01:46:57.333665  103889 event.go:262] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-1", UID:"03be2b8e-bda8-4349-a9cf-805d605962e2", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RemovingNode' Node node-1 event: Removing Node node-1 from Controller
I1019 01:46:57.333818  103889 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/taint-based-evictions2d5bd2eb-3e9b-4f4d-9205-fc148b97f1b9/events: (4.375167ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:46:57.336638  103889 httplog.go:90] POST /api/v1/namespaces/default/events: (2.354713ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36406]
I1019 01:46:57.339199  103889 httplog.go:90] POST /api/v1/namespaces/default/events: (2.0551ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36406]
I1019 01:46:57.342121  103889 httplog.go:90] POST /api/v1/namespaces/default/events: (2.355799ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36406]
I1019 01:46:57.401334  103889 shared_informer.go:227] caches populated
I1019 01:46:57.401371  103889 shared_informer.go:204] Caches are synced for taint 
I1019 01:46:57.401450  103889 node_lifecycle_controller.go:725] Controller observed a new Node: "node-0"
I1019 01:46:57.401465  103889 controller_utils.go:157] Recording Registered Node node-0 in Controller event message for node node-0
I1019 01:46:57.401498  103889 node_lifecycle_controller.go:1282] Initializing eviction metric for zone: region1:\x00:zone1

I1019 01:46:57.401524  103889 node_lifecycle_controller.go:725] Controller observed a new Node: "node-1"
I1019 01:46:57.401530  103889 controller_utils.go:157] Recording Registered Node node-1 in Controller event message for node node-1
I1019 01:46:57.401541  103889 node_lifecycle_controller.go:725] Controller observed a new Node: "node-2"
I1019 01:46:57.401547  103889 controller_utils.go:157] Recording Registered Node node-2 in Controller event message for node node-2
W1019 01:46:57.401612  103889 node_lifecycle_controller.go:978] Missing timestamp for Node node-0. Assuming now as a timestamp.
W1019 01:46:57.401799  103889 node_lifecycle_controller.go:978] Missing timestamp for Node node-1. Assuming now as a timestamp.
W1019 01:46:57.401859  103889 node_lifecycle_controller.go:978] Missing timestamp for Node node-2. Assuming now as a timestamp.
I1019 01:46:57.401924  103889 node_lifecycle_controller.go:1182] Controller detected that zone region1:\x00:zone1 is now in state Normal.
I1019 01:46:57.402171  103889 taint_manager.go:186] Starting NoExecuteTaintManager
I1019 01:46:57.402299  103889 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-0"}
I1019 01:46:57.402311  103889 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-1"}
I1019 01:46:57.402316  103889 event.go:262] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-2", UID:"d79018a8-9a3c-4413-903c-de4192d02671", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node node-2 event: Registered Node node-2 in Controller
I1019 01:46:57.402343  103889 event.go:262] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-0", UID:"fb43ff17-89b7-4538-b1ac-01fe95d00811", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node node-0 event: Registered Node node-0 in Controller
I1019 01:46:57.402357  103889 event.go:262] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-1", UID:"589e4a92-c0ce-4674-bf56-0a7d576537f0", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node node-1 event: Registered Node node-1 in Controller
I1019 01:46:57.402320  103889 taint_manager.go:438] Updating known taints on node node-0: []
I1019 01:46:57.402380  103889 taint_manager.go:459] All taints were removed from the Node node-0. Cancelling all evictions...
I1019 01:46:57.402386  103889 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-2"}
I1019 01:46:57.402394  103889 taint_manager.go:438] Updating known taints on node node-2: []
I1019 01:46:57.402409  103889 taint_manager.go:398] Noticed pod update: types.NamespacedName{Namespace:"taint-based-evictions2d5bd2eb-3e9b-4f4d-9205-fc148b97f1b9", Name:"testpod-1"}
I1019 01:46:57.402429  103889 taint_manager.go:438] Updating known taints on node node-1: []
I1019 01:46:57.404884  103889 httplog.go:90] POST /api/v1/namespaces/default/events: (2.357496ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:46:57.407451  103889 httplog.go:90] POST /api/v1/namespaces/default/events: (1.969474ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:46:57.409260  103889 httplog.go:90] POST /api/v1/namespaces/default/events: (1.277578ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:46:57.426912  103889 httplog.go:90] GET /api/v1/namespaces/taint-based-evictions2d5bd2eb-3e9b-4f4d-9205-fc148b97f1b9/pods/testpod-1: (1.951188ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:46:57.428649  103889 httplog.go:90] GET /api/v1/namespaces/taint-based-evictions2d5bd2eb-3e9b-4f4d-9205-fc148b97f1b9/pods/testpod-1: (1.165325ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:46:57.431418  103889 httplog.go:90] GET /api/v1/nodes/node-0: (2.233853ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:46:57.434454  103889 httplog.go:90] PUT /api/v1/nodes/node-0/status: (2.535066ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:46:57.435169  103889 httplog.go:90] GET /api/v1/nodes/node-0?resourceVersion=0: (380.058µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:46:57.438221  103889 httplog.go:90] PATCH /api/v1/nodes/node-0: (2.438035ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:46:57.438739  103889 controller_utils.go:193] Added [&Taint{Key:node.kubernetes.io/not-ready,Value:,Effect:NoSchedule,TimeAdded:2019-10-19 01:46:57.434570565 +0000 UTC m=+250.627473512,}] Taint to Node node-0
I1019 01:46:57.438785  103889 controller_utils.go:205] Made sure that Node node-0 has no [] Taint
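(Above, the node lifecycle controller patches node-0 with the node.kubernetes.io/not-ready:NoSchedule taint because the test marked the node NotReady. A simplified read-modify-write sketch of applying the same taint with client-go — the controller's real code path uses a patch, and node-0 is just the fixture's node name:

    package main

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        node, err := cs.CoreV1().Nodes().Get(context.TODO(), "node-0", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        // Append the same taint the controller added, with a TimeAdded stamp.
        now := metav1.Now()
        node.Spec.Taints = append(node.Spec.Taints, corev1.Taint{
            Key:       "node.kubernetes.io/not-ready",
            Effect:    corev1.TaintEffectNoSchedule,
            TimeAdded: &now,
        })
        if _, err := cs.CoreV1().Nodes().Update(context.TODO(), node, metav1.UpdateOptions{}); err != nil {
            panic(err)
        }
    }
)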
I1019 01:46:57.537003  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.547496ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:46:57.562676  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:46:57.562721  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:46:57.562756  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:46:57.562764  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:46:57.562854  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:46:57.562892  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:46:57.563998  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:46:57.636422  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.322373ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:46:57.736741  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.601398ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:46:57.836772  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.651053ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:46:57.936904  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.743336ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:46:58.032232  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:46:58.032286  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:46:58.032377  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:46:58.033489  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:46:58.035158  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:46:58.036321  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:46:58.036928  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.817041ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:46:58.098952  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:46:58.099156  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:46:58.099155  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:46:58.099449  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:46:58.102926  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:46:58.102973  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:46:58.137278  103889 httplog.go:90] GET /api/v1/nodes/node-0: (2.117004ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:46:58.236916  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:46:58.237251  103889 httplog.go:90] GET /api/v1/nodes/node-0: (2.123154ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:46:58.303475  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:46:58.337056  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.81572ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:46:58.437050  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.845587ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:46:58.536944  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.711562ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:46:58.562889  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:46:58.562994  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:46:58.562887  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:46:58.562928  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:46:58.562964  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:46:58.563074  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:46:58.564146  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:46:58.637179  103889 httplog.go:90] GET /api/v1/nodes/node-0: (2.037904ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:46:58.737066  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.854931ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:46:58.836657  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.590054ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:46:58.937883  103889 httplog.go:90] GET /api/v1/nodes/node-0: (2.71793ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:46:59.032399  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:46:59.032498  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:46:59.032454  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:46:59.033605  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:46:59.035373  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:46:59.036499  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:46:59.037136  103889 httplog.go:90] GET /api/v1/nodes/node-0: (2.053322ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:46:59.099339  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:46:59.099362  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:46:59.099854  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:46:59.099932  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:46:59.107548  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:46:59.107593  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:46:59.136957  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.797254ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:46:59.236726  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.561856ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:46:59.237125  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:46:59.303885  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:46:59.337047  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.807151ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:46:59.436405  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.289025ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:46:59.536638  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.527437ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:46:59.563113  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:46:59.563147  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:46:59.563115  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:46:59.563113  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:46:59.563127  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:46:59.563239  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:46:59.564296  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:46:59.636997  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.8628ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:46:59.737022  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.855405ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:46:59.836782  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.58568ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:46:59.936939  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.708254ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:00.032577  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:00.032591  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:00.032596  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:00.034109  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:00.035793  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:00.036601  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:00.037163  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.949003ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:00.099537  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:00.099645  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:00.099971  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:00.100071  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:00.107801  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:00.107816  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:00.136882  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.775768ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:00.237299  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:00.237738  103889 httplog.go:90] GET /api/v1/nodes/node-0: (2.549027ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:00.304060  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:00.337355  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.601569ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:00.436935  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.706003ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:00.465570  103889 httplog.go:90] GET /api/v1/namespaces/default: (1.615903ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34350]
I1019 01:47:00.467632  103889 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.509012ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34350]
I1019 01:47:00.470474  103889 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (2.293698ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34350]
I1019 01:47:00.543652  103889 httplog.go:90] GET /api/v1/nodes/node-0: (8.479896ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:00.563291  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:00.563322  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:00.563321  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:00.563340  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:00.563332  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:00.563418  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:00.564421  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:00.636858  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.759421ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:00.736909  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.67601ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:00.836782  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.580538ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:00.944853  103889 httplog.go:90] GET /api/v1/nodes/node-0: (2.608126ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:01.032892  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:01.033016  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:01.033051  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:01.034248  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:01.035977  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:01.036579  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.51999ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:01.036925  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:01.100133  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:01.100206  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:01.100205  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:01.100419  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:01.108060  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:01.108129  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:01.136858  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.705051ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:01.237172  103889 httplog.go:90] GET /api/v1/nodes/node-0: (2.032816ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:01.237433  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:01.304258  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:01.337265  103889 httplog.go:90] GET /api/v1/nodes/node-0: (2.003659ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:01.436768  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.760533ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:01.536580  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.369621ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:01.563418  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:01.563445  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:01.563457  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:01.563476  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:01.563475  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:01.563490  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:01.564580  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:01.638389  103889 httplog.go:90] GET /api/v1/nodes/node-0: (3.320154ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:01.736654  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.497488ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:01.837938  103889 httplog.go:90] GET /api/v1/nodes/node-0: (2.486718ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:01.936620  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.433816ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:02.033169  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:02.033194  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:02.033220  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:02.034355  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:02.036424  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:02.036684  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.586987ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:02.037099  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:02.100348  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:02.100372  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:02.100479  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:02.100781  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:02.108255  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:02.108297  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:02.137002  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.81973ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:02.236963  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.8752ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:02.237600  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:02.304480  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:02.337234  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.973482ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:02.402178  103889 node_lifecycle_controller.go:1060] node node-0 hasn't been updated for 5.000546292s. Last Ready is: &NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:0001-01-01 00:00:00 +0000 UTC,Reason:,Message:,}
I1019 01:47:02.402269  103889 node_lifecycle_controller.go:1050] Condition MemoryPressure of node node-0 was never updated by kubelet
I1019 01:47:02.402280  103889 node_lifecycle_controller.go:1050] Condition DiskPressure of node node-0 was never updated by kubelet
I1019 01:47:02.402298  103889 node_lifecycle_controller.go:1050] Condition PIDPressure of node node-0 was never updated by kubelet
I1019 01:47:02.406217  103889 httplog.go:90] PUT /api/v1/nodes/node-0/status: (3.47153ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:02.407435  103889 node_lifecycle_controller.go:814] Node node-0 is NotReady as of 2019-10-19 01:47:02.407418528 +0000 UTC m=+255.600321486. Adding it to the Taint queue.
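At this point the node lifecycle controller has noticed that node-0's status is stale past the monitor grace period (evidently around 5s in this test; the production default is 40s), so it marks the never-reported pressure conditions Unknown, records the node as NotReady, and adds it to the taint queue. A simplified sketch of the staleness check, not the controller's actual code path (which also tracks its own probe timestamps):

    package sketch

    import (
        "time"

        v1 "k8s.io/api/core/v1"
    )

    // readyConditionStale reports whether the node's Ready condition heartbeat is older than
    // the grace period, which is roughly the check behind the "hasn't been updated for ..."
    // lines above (simplified sketch).
    func readyConditionStale(node *v1.Node, gracePeriod time.Duration, now time.Time) bool {
        for _, c := range node.Status.Conditions {
            if c.Type == v1.NodeReady {
                return now.Sub(c.LastHeartbeatTime.Time) > gracePeriod
            }
        }
        // No Ready condition recorded yet: treat the node as stale.
        return true
    }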
I1019 01:47:02.407497  103889 node_lifecycle_controller.go:1060] node node-1 hasn't been updated for 5.005679004s. Last Ready is: &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:0001-01-01 00:00:00 +0000 UTC,Reason:,Message:,}
I1019 01:47:02.407526  103889 node_lifecycle_controller.go:1050] Condition MemoryPressure of node node-1 was never updated by kubelet
I1019 01:47:02.407535  103889 node_lifecycle_controller.go:1050] Condition DiskPressure of node node-1 was never updated by kubelet
I1019 01:47:02.407542  103889 node_lifecycle_controller.go:1050] Condition PIDPressure of node node-1 was never updated by kubelet
I1019 01:47:02.409090  103889 httplog.go:90] GET /api/v1/nodes/node-0?resourceVersion=0: (1.651692ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:02.410822  103889 httplog.go:90] PUT /api/v1/nodes/node-1/status: (2.217924ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56060]
I1019 01:47:02.412533  103889 httplog.go:90] GET /api/v1/nodes/node-1?resourceVersion=0: (569.187µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56060]
I1019 01:47:02.413752  103889 httplog.go:90] GET /api/v1/nodes/node-0?resourceVersion=0: (541.663µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56060]
I1019 01:47:02.414714  103889 httplog.go:90] PATCH /api/v1/nodes/node-0: (3.166837ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:02.415032  103889 controller_utils.go:193] Added [&Taint{Key:node.kubernetes.io/unreachable,Value:,Effect:NoSchedule,TimeAdded:2019-10-19 01:47:02.406024497 +0000 UTC m=+255.598927451,}] Taint to Node node-0
I1019 01:47:02.415724  103889 httplog.go:90] GET /api/v1/nodes/node-0?resourceVersion=0: (424.132µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56060]
I1019 01:47:02.417426  103889 httplog.go:90] PATCH /api/v1/nodes/node-1: (3.174858ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56062]
I1019 01:47:02.418538  103889 controller_utils.go:193] Added [&Taint{Key:node.kubernetes.io/unreachable,Value:,Effect:NoSchedule,TimeAdded:2019-10-19 01:47:02.411255195 +0000 UTC m=+255.604158150,}] Taint to Node node-1
I1019 01:47:02.418577  103889 controller_utils.go:205] Made sure that Node node-1 has no [] Taint
I1019 01:47:02.418614  103889 httplog.go:90] PATCH /api/v1/nodes/node-0: (3.515039ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:02.418938  103889 controller_utils.go:193] Added [&Taint{Key:node.kubernetes.io/unreachable,Value:,Effect:NoExecute,TimeAdded:2019-10-19 01:47:02.410279241 +0000 UTC m=+255.603182213,}] Taint to Node node-0
I1019 01:47:02.419292  103889 controller_utils.go:205] Made sure that Node node-0 has no [&Taint{Key:node.kubernetes.io/not-ready,Value:,Effect:NoExecute,TimeAdded:<nil>,}] Taint
I1019 01:47:02.419467  103889 controller_utils.go:169] Recording status change NodeNotReady event message for node node-1
I1019 01:47:02.419799  103889 event.go:262] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-1", UID:"589e4a92-c0ce-4674-bf56-0a7d576537f0", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeNotReady' Node node-1 status is now: NodeNotReady
I1019 01:47:02.420023  103889 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-0"}
I1019 01:47:02.420041  103889 taint_manager.go:438] Updating known taints on node node-0: [{node.kubernetes.io/unreachable  NoExecute 2019-10-19 01:47:02 +0000 UTC}]
I1019 01:47:02.420115  103889 timed_workers.go:110] Adding TimedWorkerQueue item taint-based-evictions2d5bd2eb-3e9b-4f4d-9205-fc148b97f1b9/testpod-1 at 2019-10-19 01:47:02.420104365 +0000 UTC m=+255.613007323 to be fired at 2019-10-19 01:52:02.420104365 +0000 UTC m=+555.613007323
I1019 01:47:02.421662  103889 store.go:365] GuaranteedUpdate of /dd1980d2-093b-4941-9c51-127e644c1645/minions/node-0 failed because of a conflict, going to retry
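The GuaranteedUpdate conflict above is the API server's storage layer retrying after an optimistic-concurrency clash on node-0 (several controllers are patching the same node at once). Client code sees the analogous situation as an HTTP 409 and typically wraps the write in client-go's retry helper; a sketch under that assumption (the controllers in this log actually use PATCH helpers, and this uses the context-free client-go API of the period):

    package sketch

    import (
        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/util/retry"
    )

    // addTaintWithRetry re-reads the node and retries the update whenever the apiserver
    // reports a resourceVersion conflict (illustrative sketch only).
    func addTaintWithRetry(client kubernetes.Interface, nodeName string, taint v1.Taint) error {
        return retry.RetryOnConflict(retry.DefaultRetry, func() error {
            node, err := client.CoreV1().Nodes().Get(nodeName, metav1.GetOptions{})
            if err != nil {
                return err
            }
            node.Spec.Taints = append(node.Spec.Taints, taint)
            _, err = client.CoreV1().Nodes().Update(node)
            return err
        })
    }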
I1019 01:47:02.423180  103889 httplog.go:90] GET /api/v1/pods?fieldSelector=spec.nodeName%3Dnode-1: (3.502564ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:02.423358  103889 controller_utils.go:116] Update ready status of pods on node [node-1]
I1019 01:47:02.423440  103889 node_lifecycle_controller.go:1060] node node-2 hasn't been updated for 5.021570939s. Last Ready is: &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:0001-01-01 00:00:00 +0000 UTC,Reason:,Message:,}
I1019 01:47:02.423495  103889 node_lifecycle_controller.go:1050] Condition MemoryPressure of node node-2 was never updated by kubelet
I1019 01:47:02.423505  103889 node_lifecycle_controller.go:1050] Condition DiskPressure of node node-2 was never updated by kubelet
I1019 01:47:02.423511  103889 node_lifecycle_controller.go:1050] Condition PIDPressure of node node-2 was never updated by kubelet
I1019 01:47:02.423722  103889 httplog.go:90] POST /api/v1/namespaces/default/events: (3.80317ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56062]
I1019 01:47:02.424907  103889 httplog.go:90] PATCH /api/v1/nodes/node-0: (6.537323ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56060]
I1019 01:47:02.425123  103889 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-0"}
I1019 01:47:02.425146  103889 taint_manager.go:438] Updating known taints on node node-0: []
I1019 01:47:02.425157  103889 taint_manager.go:459] All taints were removed from the Node node-0. Cancelling all evictions...
I1019 01:47:02.425166  103889 timed_workers.go:129] Cancelling TimedWorkerQueue item taint-based-evictions2d5bd2eb-3e9b-4f4d-9205-fc148b97f1b9/testpod-1 at 2019-10-19 01:47:02.425163425 +0000 UTC m=+255.618066375
I1019 01:47:02.425232  103889 controller_utils.go:205] Made sure that Node node-0 has no [&Taint{Key:node.kubernetes.io/not-ready,Value:,Effect:NoSchedule,TimeAdded:2019-10-19 01:46:57 +0000 UTC,}] Taint
I1019 01:47:02.425279  103889 event.go:262] Event(v1.ObjectReference{Kind:"Pod", Namespace:"taint-based-evictions2d5bd2eb-3e9b-4f4d-9205-fc148b97f1b9", Name:"testpod-1", UID:"", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TaintManagerEviction' Cancelling deletion of Pod taint-based-evictions2d5bd2eb-3e9b-4f4d-9205-fc148b97f1b9/testpod-1
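The taint manager first queued testpod-1 for deletion five minutes out (matching the 300-second toleration Kubernetes applies by default for node.kubernetes.io/unreachable when, as in this subtest, the pod declares none), then cancelled the work item as soon as it observed node-0 with no NoExecute taints. A minimal stand-in for that schedule/cancel behaviour; the type and names below are invented for illustration and are not the controller's TimedWorkerQueue:

    package sketch

    import (
        "sync"
        "time"
    )

    // evictionQueue fires an eviction function after a delay and can cancel it if the
    // taint goes away before the timer expires.
    type evictionQueue struct {
        mu     sync.Mutex
        timers map[string]*time.Timer
    }

    func newEvictionQueue() *evictionQueue {
        return &evictionQueue{timers: make(map[string]*time.Timer)}
    }

    // Schedule arms an eviction for the pod after delay (e.g. the 300s seen in the log).
    func (q *evictionQueue) Schedule(podKey string, delay time.Duration, evict func(podKey string)) {
        q.mu.Lock()
        defer q.mu.Unlock()
        if _, ok := q.timers[podKey]; ok {
            return // already queued
        }
        q.timers[podKey] = time.AfterFunc(delay, func() {
            q.mu.Lock()
            delete(q.timers, podKey)
            q.mu.Unlock()
            evict(podKey)
        })
    }

    // Cancel drops a pending eviction, as happens above when all NoExecute taints are
    // removed from the node before the timer fires.
    func (q *evictionQueue) Cancel(podKey string) {
        q.mu.Lock()
        defer q.mu.Unlock()
        if t, ok := q.timers[podKey]; ok {
            t.Stop()
            delete(q.timers, podKey)
        }
    }

Calling Schedule with a 300s delay and then Cancel reproduces the add/cancel pair recorded above for taint-based-evictions.../testpod-1.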
I1019 01:47:02.426437  103889 httplog.go:90] PUT /api/v1/nodes/node-2/status: (2.640797ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:02.426670  103889 controller_utils.go:169] Recording status change NodeNotReady event message for node node-2
I1019 01:47:02.426896  103889 event.go:262] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-2", UID:"d79018a8-9a3c-4413-903c-de4192d02671", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeNotReady' Node node-2 status is now: NodeNotReady
I1019 01:47:02.429290  103889 httplog.go:90] GET /api/v1/nodes/node-2?resourceVersion=0: (465.355µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56064]
I1019 01:47:02.429796  103889 httplog.go:90] POST /api/v1/namespaces/taint-based-evictions2d5bd2eb-3e9b-4f4d-9205-fc148b97f1b9/events: (4.357134ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56060]
I1019 01:47:02.429868  103889 httplog.go:90] POST /api/v1/namespaces/default/events: (2.048465ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56062]
I1019 01:47:02.432884  103889 httplog.go:90] GET /api/v1/pods?fieldSelector=spec.nodeName%3Dnode-2: (6.011121ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:02.433303  103889 controller_utils.go:116] Update ready status of pods on node [node-2]
I1019 01:47:02.433357  103889 node_lifecycle_controller.go:1132] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
I1019 01:47:02.434322  103889 httplog.go:90] PATCH /api/v1/nodes/node-2: (3.727734ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56062]
I1019 01:47:02.434845  103889 controller_utils.go:193] Added [&Taint{Key:node.kubernetes.io/unreachable,Value:,Effect:NoSchedule,TimeAdded:2019-10-19 01:47:02.427770778 +0000 UTC m=+255.620673739,}] Taint to Node node-2
I1019 01:47:02.434878  103889 controller_utils.go:205] Made sure that Node node-2 has no [] Taint
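Because every node stopped heartbeating at once, the controller concludes it is more likely partitioned from the nodes than facing genuine node failures, and "master disruption mode" suspends taint-based evictions until some node reports Ready again. The gist of that decision, heavily simplified (state names follow the controller's logs; the threshold value is an illustrative assumption, not a quote of its configuration):

    package sketch

    // zoneState mirrors, in spirit, the controller's per-zone health classification: with
    // every node NotReady it stops evicting entirely; with a large fraction NotReady it
    // only slows down.
    func zoneState(readyNodes, totalNodes int) string {
        if totalNodes == 0 {
            return "Initial"
        }
        notReady := totalNodes - readyNodes
        switch {
        case readyNodes == 0:
            return "FullDisruption" // assume a control-plane/network partition; pause evictions
        case float64(notReady)/float64(totalNodes) >= 0.55:
            return "PartialDisruption" // keep evicting, but at a reduced rate
        default:
            return "Normal"
        }
    }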
I1019 01:47:02.436290  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.243735ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:02.537085  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.957916ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:02.563598  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:02.563621  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:02.563598  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:02.563656  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:02.563662  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:02.563811  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:02.564810  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:02.637746  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.824178ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:02.736815  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.668143ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:02.836773  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.615635ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:02.937178  103889 httplog.go:90] GET /api/v1/nodes/node-0: (2.061887ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:03.033386  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:03.033537  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:03.033560  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:03.034775  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:03.036594  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:03.037109  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.505754ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:03.037226  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:03.100530  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:03.100543  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:03.100591  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:03.100966  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:03.108412  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:03.108447  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:03.136761  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.458023ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:03.237119  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.945474ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:03.237752  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:03.304822  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:03.337435  103889 httplog.go:90] GET /api/v1/nodes/node-0: (2.235023ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:03.437003  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.885143ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:03.537457  103889 httplog.go:90] GET /api/v1/nodes/node-0: (2.332105ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:03.563753  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:03.563796  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:03.563763  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:03.563763  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:03.563776  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:03.564161  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:03.565166  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:03.638136  103889 httplog.go:90] GET /api/v1/nodes/node-0: (2.893246ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:03.737274  103889 httplog.go:90] GET /api/v1/nodes/node-0: (2.060677ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:03.837134  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.713967ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:03.937019  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.782458ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:04.033579  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:04.033671  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:04.033714  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:04.034924  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:04.036756  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:04.037985  103889 httplog.go:90] GET /api/v1/nodes/node-0: (2.813869ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:04.038393  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:04.100719  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:04.100799  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:04.100812  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:04.101119  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:04.108621  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:04.108786  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:04.137673  103889 httplog.go:90] GET /api/v1/nodes/node-0: (2.138972ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:04.238074  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:04.238749  103889 httplog.go:90] GET /api/v1/nodes/node-0: (3.567297ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:04.305250  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:04.336835  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.625209ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:04.437421  103889 httplog.go:90] GET /api/v1/nodes/node-0: (2.253974ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:04.537231  103889 httplog.go:90] GET /api/v1/nodes/node-0: (2.056091ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:04.563966  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:04.563972  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:04.563976  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:04.563990  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:04.564151  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:04.564308  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:04.565332  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:04.638071  103889 httplog.go:90] GET /api/v1/nodes/node-0: (2.880443ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:04.737827  103889 httplog.go:90] GET /api/v1/nodes/node-0: (2.459326ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:04.838620  103889 httplog.go:90] GET /api/v1/nodes/node-0: (3.384844ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:04.937820  103889 httplog.go:90] GET /api/v1/nodes/node-0: (2.568412ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:05.033839  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:05.033867  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:05.033913  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:05.035158  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:05.036867  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:05.037241  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.980066ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:05.038560  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:05.100954  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:05.100960  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:05.100961  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:05.101448  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:05.108747  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:05.109081  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:05.138292  103889 httplog.go:90] GET /api/v1/nodes/node-0: (2.586973ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:05.237496  103889 httplog.go:90] GET /api/v1/nodes/node-0: (2.300353ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:05.238286  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:05.305449  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:05.337441  103889 httplog.go:90] GET /api/v1/nodes/node-0: (2.217291ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:05.437256  103889 httplog.go:90] GET /api/v1/nodes/node-0: (2.139099ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:05.538223  103889 httplog.go:90] GET /api/v1/nodes/node-0: (3.067215ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:05.564129  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:05.564151  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:05.564136  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:05.564228  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:05.564229  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:05.564441  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:05.565465  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:05.637630  103889 httplog.go:90] GET /api/v1/nodes/node-0: (2.44054ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:05.736718  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.560271ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:05.836815  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.642885ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:05.937106  103889 httplog.go:90] GET /api/v1/nodes/node-0: (2.013118ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:06.034049  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:06.034059  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:06.034119  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:06.035725  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:06.037190  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:06.038600  103889 httplog.go:90] GET /api/v1/nodes/node-0: (3.472206ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:06.038725  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:06.101130  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:06.101153  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:06.101156  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:06.101607  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:06.108901  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:06.109279  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:06.137202  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.965657ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:06.236792  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.675652ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:06.238442  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:06.305627  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:06.337225  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.971938ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:06.436983  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.84099ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:06.537033  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.764089ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:06.564327  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:06.564381  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:06.564386  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:06.564414  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:06.564396  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:06.564568  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:06.565678  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:06.636732  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.50515ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:06.736877  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.678967ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:06.837306  103889 httplog.go:90] GET /api/v1/nodes/node-0: (2.019098ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:06.843583  103889 httplog.go:90] GET /api/v1/namespaces/default: (1.961408ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36406]
I1019 01:47:06.845786  103889 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.488841ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36406]
I1019 01:47:06.847264  103889 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.060229ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36406]
I1019 01:47:06.909139  103889 httplog.go:90] GET /api/v1/namespaces/default: (2.471258ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:06.911503  103889 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.528048ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:06.913143  103889 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.215815ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:06.937189  103889 httplog.go:90] GET /api/v1/nodes/node-0: (2.056574ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:07.034249  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:07.034276  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:07.034266  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:07.035920  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:07.036641  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.464359ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:07.037322  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:07.038913  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:07.101347  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:07.101375  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:07.101347  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:07.102096  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:07.109085  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:07.109433  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:07.137365  103889 httplog.go:90] GET /api/v1/nodes/node-0: (2.122023ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:07.236814  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.389438ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:07.238607  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:07.305848  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:07.336900  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.75217ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:07.433821  103889 node_lifecycle_controller.go:1060] node node-0 hasn't been updated for 10.032186915s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-10-19 01:47:02 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I1019 01:47:07.434510  103889 node_lifecycle_controller.go:1060] node node-0 hasn't been updated for 10.032878872s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-10-19 01:46:57 +0000 UTC,LastTransitionTime:2019-10-19 01:47:02 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I1019 01:47:07.434611  103889 node_lifecycle_controller.go:1060] node node-0 hasn't been updated for 10.032988317s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-10-19 01:46:57 +0000 UTC,LastTransitionTime:2019-10-19 01:47:02 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I1019 01:47:07.434715  103889 node_lifecycle_controller.go:1060] node node-0 hasn't been updated for 10.033092935s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-10-19 01:46:57 +0000 UTC,LastTransitionTime:2019-10-19 01:47:02 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I1019 01:47:07.435839  103889 node_lifecycle_controller.go:827] Node node-0 is unresponsive as of 2019-10-19 01:47:07.435818293 +0000 UTC m=+260.628721248. Adding it to the Taint queue.
I1019 01:47:07.435902  103889 node_lifecycle_controller.go:1060] node node-1 hasn't been updated for 10.034083096s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-10-19 01:47:02 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I1019 01:47:07.435927  103889 node_lifecycle_controller.go:1060] node node-1 hasn't been updated for 10.034109693s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-10-19 01:46:57 +0000 UTC,LastTransitionTime:2019-10-19 01:47:02 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I1019 01:47:07.435946  103889 node_lifecycle_controller.go:1060] node node-1 hasn't been updated for 10.034128943s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-10-19 01:46:57 +0000 UTC,LastTransitionTime:2019-10-19 01:47:02 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I1019 01:47:07.435961  103889 node_lifecycle_controller.go:1060] node node-1 hasn't been updated for 10.034144133s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-10-19 01:46:57 +0000 UTC,LastTransitionTime:2019-10-19 01:47:02 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I1019 01:47:07.436132  103889 node_lifecycle_controller.go:827] Node node-1 is unresponsive as of 2019-10-19 01:47:07.436119948 +0000 UTC m=+260.629022904. Adding it to the Taint queue.
I1019 01:47:07.436327  103889 node_lifecycle_controller.go:1060] node node-2 hasn't been updated for 10.034458093s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-10-19 01:47:02 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I1019 01:47:07.436457  103889 node_lifecycle_controller.go:1060] node node-2 hasn't been updated for 10.034587613s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-10-19 01:46:57 +0000 UTC,LastTransitionTime:2019-10-19 01:47:02 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I1019 01:47:07.436531  103889 node_lifecycle_controller.go:1060] node node-2 hasn't been updated for 10.034662402s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-10-19 01:46:57 +0000 UTC,LastTransitionTime:2019-10-19 01:47:02 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I1019 01:47:07.436582  103889 node_lifecycle_controller.go:1060] node node-2 hasn't been updated for 10.034714589s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-10-19 01:46:57 +0000 UTC,LastTransitionTime:2019-10-19 01:47:02 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I1019 01:47:07.436683  103889 node_lifecycle_controller.go:827] Node node-2 is unresponsive as of 2019-10-19 01:47:07.436673574 +0000 UTC m=+260.629576548. Adding it to the Taint queue.
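Ten seconds in, the Ready conditions have flipped to Unknown ("Kubelet stopped posting node status") and all three nodes are queued for the unreachable taints. This subtest deliberately runs pods with no tolerations, so they are fully exposed to node.kubernetes.io/unreachable:NoExecute. A pod that wants to control its own grace period declares a toleration instead; a sketch of building one with the core/v1 types (the 60-second value is an arbitrary example):

    package sketch

    import (
        v1 "k8s.io/api/core/v1"
    )

    // unreachableToleration lets a pod stay bound to an unreachable node for the given
    // number of seconds before taint-based eviction deletes it.
    func unreachableToleration(seconds int64) v1.Toleration {
        return v1.Toleration{
            Key:               "node.kubernetes.io/unreachable",
            Operator:          v1.TolerationOpExists,
            Effect:            v1.TaintEffectNoExecute,
            TolerationSeconds: &seconds,
        }
    }

With TolerationSeconds left nil the pod tolerates the taint indefinitely; with no toleration at all it falls back to the 300-second default exercised earlier in this test.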
I1019 01:47:07.436868  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.737986ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:07.536649  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.508359ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:07.564488  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:07.564548  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:07.564598  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:07.564524  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:07.564638  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:07.564745  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:07.565874  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:07.636950  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.764203ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:07.736986  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.839538ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:07.837199  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.935438ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:07.937004  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.793944ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:08.034410  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:08.034494  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:08.034584  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:08.036066  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:08.036788  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.623665ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:08.037470  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:08.039075  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:08.101601  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:08.101637  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:08.101603  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:08.102248  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:08.109280  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:08.109595  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:08.137935  103889 httplog.go:90] GET /api/v1/nodes/node-0: (2.819535ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:08.236800  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.541719ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:08.238774  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:08.306043  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:08.337189  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.900232ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:08.436900  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.680287ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:08.537137  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.668265ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:08.564749  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:08.564802  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:08.564822  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:08.564842  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:08.564929  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:08.565001  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:08.566229  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:08.637272  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.842538ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:08.737202  103889 httplog.go:90] GET /api/v1/nodes/node-0: (2.148535ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:08.836748  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.581722ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:08.937344  103889 httplog.go:90] GET /api/v1/nodes/node-0: (2.082125ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:09.034569  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:09.034619  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:09.034714  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:09.036387  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:09.036831  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.733269ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:09.037704  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:09.039256  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:09.101766  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:09.101776  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:09.101779  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:09.102409  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:09.109448  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:09.109738  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:09.136744  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.634097ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:09.237212  103889 httplog.go:90] GET /api/v1/nodes/node-0: (2.013964ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:09.239072  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:09.306232  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:09.338440  103889 httplog.go:90] GET /api/v1/nodes/node-0: (3.240042ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:09.437813  103889 httplog.go:90] GET /api/v1/nodes/node-0: (2.620383ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:09.537306  103889 httplog.go:90] GET /api/v1/nodes/node-0: (2.025594ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:09.564942  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:09.564965  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:09.564986  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:09.565006  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:09.565154  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:09.565195  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:09.566361  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:09.637188  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.998111ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:09.737647  103889 httplog.go:90] GET /api/v1/nodes/node-0: (2.521609ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:09.837611  103889 httplog.go:90] GET /api/v1/nodes/node-0: (2.182571ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:09.938632  103889 httplog.go:90] GET /api/v1/nodes/node-0: (2.453367ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:10.034753  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:10.034777  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:10.034875  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:10.036506  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:10.037681  103889 httplog.go:90] GET /api/v1/nodes/node-0: (2.336945ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:10.037837  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:10.039463  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:10.101987  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:10.101987  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:10.102242  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:10.102586  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:10.109604  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:10.109856  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:10.137640  103889 httplog.go:90] GET /api/v1/nodes/node-0: (2.385875ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:10.237958  103889 httplog.go:90] GET /api/v1/nodes/node-0: (2.711398ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:10.239234  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:10.306550  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:10.337443  103889 httplog.go:90] GET /api/v1/nodes/node-0: (2.112466ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:10.440084  103889 httplog.go:90] GET /api/v1/nodes/node-0: (4.879309ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:10.465850  103889 httplog.go:90] GET /api/v1/namespaces/default: (1.709196ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34350]
I1019 01:47:10.468595  103889 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.762016ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34350]
I1019 01:47:10.470594  103889 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.373823ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34350]
I1019 01:47:10.537125  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.825113ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:10.565396  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:10.565498  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:10.565765  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:10.565796  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:10.565975  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:10.566091  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:10.566623  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:10.636973  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.533462ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:10.736796  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.70046ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:10.836895  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.688256ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:10.937312  103889 httplog.go:90] GET /api/v1/nodes/node-0: (2.195863ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:11.034899  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:11.034956  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:11.035089  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:11.036651  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:11.037949  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:11.038063  103889 httplog.go:90] GET /api/v1/nodes/node-0: (2.777563ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:11.039572  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:11.102116  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:11.102178  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:11.102420  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:11.103315  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:11.109993  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:11.109992  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:11.137214  103889 httplog.go:90] GET /api/v1/nodes/node-0: (2.06773ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:11.236974  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.847298ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:11.239433  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:11.306754  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:11.336788  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.611644ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:11.436892  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.748226ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:11.537041  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.852597ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:11.565595  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:11.565622  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:11.565898  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:11.565934  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:11.566117  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:11.566251  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:11.566801  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:11.637633  103889 httplog.go:90] GET /api/v1/nodes/node-0: (2.412132ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:11.737424  103889 httplog.go:90] GET /api/v1/nodes/node-0: (2.025802ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:11.837208  103889 httplog.go:90] GET /api/v1/nodes/node-0: (2.007583ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:11.941001  103889 httplog.go:90] GET /api/v1/nodes/node-0: (5.87361ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:12.035078  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:12.035123  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:12.035190  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:12.036809  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:12.037464  103889 httplog.go:90] GET /api/v1/nodes/node-0: (2.040222ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:12.038120  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:12.039901  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:12.102270  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:12.102314  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:12.102597  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:12.103434  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:12.110173  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:12.110210  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:12.137977  103889 httplog.go:90] GET /api/v1/nodes/node-0: (2.772875ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:12.238239  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.962282ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:12.239602  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:12.306921  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:12.337494  103889 httplog.go:90] GET /api/v1/nodes/node-0: (2.038896ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:12.436999  103889 node_lifecycle_controller.go:1060] node node-0 hasn't been updated for 15.035369784s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-10-19 01:47:02 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I1019 01:47:12.437273  103889 node_lifecycle_controller.go:1060] node node-0 hasn't been updated for 15.035644971s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-10-19 01:46:57 +0000 UTC,LastTransitionTime:2019-10-19 01:47:02 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I1019 01:47:12.437402  103889 node_lifecycle_controller.go:1060] node node-0 hasn't been updated for 15.035779923s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-10-19 01:46:57 +0000 UTC,LastTransitionTime:2019-10-19 01:47:02 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I1019 01:47:12.437498  103889 node_lifecycle_controller.go:1060] node node-0 hasn't been updated for 15.035876611s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-10-19 01:46:57 +0000 UTC,LastTransitionTime:2019-10-19 01:47:02 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I1019 01:47:12.437749  103889 node_lifecycle_controller.go:1060] node node-1 hasn't been updated for 15.035929029s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-10-19 01:47:02 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I1019 01:47:12.437865  103889 node_lifecycle_controller.go:1060] node node-1 hasn't been updated for 15.036046061s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-10-19 01:46:57 +0000 UTC,LastTransitionTime:2019-10-19 01:47:02 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I1019 01:47:12.437969  103889 node_lifecycle_controller.go:1060] node node-1 hasn't been updated for 15.036150139s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-10-19 01:46:57 +0000 UTC,LastTransitionTime:2019-10-19 01:47:02 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I1019 01:47:12.438082  103889 node_lifecycle_controller.go:1060] node node-1 hasn't been updated for 15.03626303s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-10-19 01:46:57 +0000 UTC,LastTransitionTime:2019-10-19 01:47:02 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I1019 01:47:12.438810  103889 node_lifecycle_controller.go:1060] node node-2 hasn't been updated for 15.036940518s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-10-19 01:47:02 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I1019 01:47:12.439015  103889 node_lifecycle_controller.go:1060] node node-2 hasn't been updated for 15.037141853s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-10-19 01:46:57 +0000 UTC,LastTransitionTime:2019-10-19 01:47:02 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I1019 01:47:12.439104  103889 node_lifecycle_controller.go:1060] node node-2 hasn't been updated for 15.037237192s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-10-19 01:46:57 +0000 UTC,LastTransitionTime:2019-10-19 01:47:02 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I1019 01:47:12.439200  103889 node_lifecycle_controller.go:1060] node node-2 hasn't been updated for 15.037331813s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-10-19 01:46:57 +0000 UTC,LastTransitionTime:2019-10-19 01:47:02 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
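
Note: the node_lifecycle_controller entries above record that node-0, node-1 and node-2 have not posted status for about 15s, so their Ready, MemoryPressure, DiskPressure and PIDPressure conditions are all marked Unknown. A small sketch, assuming only the k8s.io/api/core/v1 types, of the Ready-condition check that corresponds to the state shown in these lines:

    package nodecond

    import v1 "k8s.io/api/core/v1"

    // readyUnknown reports whether a node's Ready condition is Unknown, which is
    // the state the node_lifecycle_controller logs above describe once the
    // kubelet stops posting status. Illustrative only.
    func readyUnknown(node *v1.Node) bool {
        for _, cond := range node.Status.Conditions {
            if cond.Type == v1.NodeReady {
                return cond.Status == v1.ConditionUnknown
            }
        }
        // No Ready condition recorded at all; treat that as unknown as well.
        return true
    }
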
I1019 01:47:12.439530  103889 httplog.go:90] GET /api/v1/nodes/node-0: (4.28492ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:12.537159  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.900696ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:12.565763  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:12.565763  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:12.566082  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:12.566083  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:12.566405  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:12.566441  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:12.566982  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:12.641541  103889 httplog.go:90] GET /api/v1/nodes/node-0: (2.313966ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:12.737124  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.897377ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:12.837491  103889 httplog.go:90] GET /api/v1/nodes/node-0: (2.056203ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:12.937484  103889 httplog.go:90] GET /api/v1/nodes/node-0: (2.017797ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:13.035242  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:13.035330  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:13.035355  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:13.036936  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:13.037023  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.879231ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:13.038277  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:13.040052  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:13.102427  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:13.102439  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:13.102753  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:13.103599  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:13.110393  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:13.110522  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:13.136801  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.661945ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:13.237981  103889 httplog.go:90] GET /api/v1/nodes/node-0: (2.826776ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:13.239860  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:13.307105  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:13.337425  103889 httplog.go:90] GET /api/v1/nodes/node-0: (2.312315ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:13.437420  103889 httplog.go:90] GET /api/v1/nodes/node-0: (2.107813ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:13.536911  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.602723ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:13.565951  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:13.566251  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:13.565960  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:13.566225  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:13.566542  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:13.566602  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:13.567140  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:13.637983  103889 httplog.go:90] GET /api/v1/nodes/node-0: (2.794264ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:13.738488  103889 httplog.go:90] GET /api/v1/nodes/node-0: (3.344898ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:13.837100  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.880551ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:13.937510  103889 httplog.go:90] GET /api/v1/nodes/node-0: (2.319532ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:14.035586  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:14.035597  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:14.035625  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:14.037088  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:14.038278  103889 httplog.go:90] GET /api/v1/nodes/node-0: (3.0675ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:14.038397  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:14.040227  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:14.102581  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:14.102587  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:14.102876  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:14.103818  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:14.110581  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:14.110674  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:14.137267  103889 httplog.go:90] GET /api/v1/nodes/node-0: (2.11867ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:14.237241  103889 httplog.go:90] GET /api/v1/nodes/node-0: (2.053238ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:14.240195  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:14.307286  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:14.337626  103889 httplog.go:90] GET /api/v1/nodes/node-0: (2.386756ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:14.437025  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.829425ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:14.538155  103889 httplog.go:90] GET /api/v1/nodes/node-0: (2.811556ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:14.566409  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:14.566438  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:14.566414  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:14.566462  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:14.566637  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:14.566777  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:14.567263  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:14.637114  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.901812ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:14.737060  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.860701ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:14.837787  103889 httplog.go:90] GET /api/v1/nodes/node-0: (2.549337ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:14.937320  103889 httplog.go:90] GET /api/v1/nodes/node-0: (2.144663ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:15.035954  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:15.035954  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:15.036120  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:15.037263  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:15.037266  103889 httplog.go:90] GET /api/v1/nodes/node-0: (2.066157ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:15.038537  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:15.040372  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:15.102750  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:15.102757  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:15.103023  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:15.104075  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:15.110773  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:15.110791  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:15.137273  103889 httplog.go:90] GET /api/v1/nodes/node-0: (2.058258ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:15.240394  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:15.240526  103889 httplog.go:90] GET /api/v1/nodes/node-0: (4.497663ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:15.307489  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:15.337589  103889 httplog.go:90] GET /api/v1/nodes/node-0: (2.366141ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:15.436898  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.840345ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:15.536661  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.513618ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:15.566566  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:15.566586  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:15.566566  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:15.566632  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:15.566744  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:15.566956  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:15.567426  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:15.637020  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.80835ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:15.737013  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.759705ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:15.838515  103889 httplog.go:90] GET /api/v1/nodes/node-0: (3.239387ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:15.937134  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.852834ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:16.036315  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:16.036434  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:16.036450  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:16.037528  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:16.037786  103889 httplog.go:90] GET /api/v1/nodes/node-0: (2.514487ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:16.038738  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:16.040568  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:16.102957  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:16.103077  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:16.103222  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:16.104329  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:16.110960  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:16.110958  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:16.137919  103889 httplog.go:90] GET /api/v1/nodes/node-0: (2.818207ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:16.237653  103889 httplog.go:90] GET /api/v1/nodes/node-0: (2.397919ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:16.240571  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:16.307674  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:16.338107  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.877829ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:16.437293  103889 httplog.go:90] GET /api/v1/nodes/node-0: (2.118141ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:16.537535  103889 httplog.go:90] GET /api/v1/nodes/node-0: (2.239716ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:16.566768  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:16.566798  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:16.566770  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:16.566782  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:16.566877  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:16.567205  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:16.567637  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:16.637020  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.748486ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:16.736730  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.5442ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:16.836940  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.686561ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:16.843846  103889 httplog.go:90] GET /api/v1/namespaces/default: (2.090059ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36406]
I1019 01:47:16.845735  103889 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.371833ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36406]
I1019 01:47:16.847524  103889 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.349016ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36406]
I1019 01:47:16.908260  103889 httplog.go:90] GET /api/v1/namespaces/default: (1.588543ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:16.910176  103889 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.329242ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:16.912163  103889 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.524319ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:16.937067  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.826727ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:17.036529  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:17.036570  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:17.036649  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:17.037258  103889 httplog.go:90] GET /api/v1/nodes/node-0: (2.013665ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:17.037754  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:17.038891  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:17.040761  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:17.103161  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:17.103206  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:17.103417  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:17.104719  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:17.111155  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:17.111156  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:17.136969  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.808637ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:17.237186  103889 httplog.go:90] GET /api/v1/nodes/node-0: (2.038399ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:17.240686  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:17.307999  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:17.337667  103889 httplog.go:90] GET /api/v1/nodes/node-0: (2.560659ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:17.437013  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.761104ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:17.439635  103889 node_lifecycle_controller.go:1060] node node-1 hasn't been updated for 20.037805581s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-10-19 01:47:02 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I1019 01:47:17.439730  103889 node_lifecycle_controller.go:1060] node node-1 hasn't been updated for 20.03791124s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-10-19 01:46:57 +0000 UTC,LastTransitionTime:2019-10-19 01:47:02 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I1019 01:47:17.439754  103889 node_lifecycle_controller.go:1060] node node-1 hasn't been updated for 20.037936008s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-10-19 01:46:57 +0000 UTC,LastTransitionTime:2019-10-19 01:47:02 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I1019 01:47:17.439770  103889 node_lifecycle_controller.go:1060] node node-1 hasn't been updated for 20.037953097s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-10-19 01:46:57 +0000 UTC,LastTransitionTime:2019-10-19 01:47:02 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I1019 01:47:17.439841  103889 node_lifecycle_controller.go:1060] node node-2 hasn't been updated for 20.037974138s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-10-19 01:47:02 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I1019 01:47:17.439866  103889 node_lifecycle_controller.go:1060] node node-2 hasn't been updated for 20.03799991s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-10-19 01:46:57 +0000 UTC,LastTransitionTime:2019-10-19 01:47:02 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I1019 01:47:17.439886  103889 node_lifecycle_controller.go:1060] node node-2 hasn't been updated for 20.038019563s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-10-19 01:46:57 +0000 UTC,LastTransitionTime:2019-10-19 01:47:02 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I1019 01:47:17.439903  103889 node_lifecycle_controller.go:1060] node node-2 hasn't been updated for 20.038036651s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-10-19 01:46:57 +0000 UTC,LastTransitionTime:2019-10-19 01:47:02 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I1019 01:47:17.439964  103889 node_lifecycle_controller.go:1060] node node-0 hasn't been updated for 20.038341238s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-10-19 01:47:02 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I1019 01:47:17.439987  103889 node_lifecycle_controller.go:1060] node node-0 hasn't been updated for 20.038366289s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-10-19 01:46:57 +0000 UTC,LastTransitionTime:2019-10-19 01:47:02 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I1019 01:47:17.440000  103889 node_lifecycle_controller.go:1060] node node-0 hasn't been updated for 20.038379879s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-10-19 01:46:57 +0000 UTC,LastTransitionTime:2019-10-19 01:47:02 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I1019 01:47:17.440016  103889 node_lifecycle_controller.go:1060] node node-0 hasn't been updated for 20.03839585s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-10-19 01:46:57 +0000 UTC,LastTransitionTime:2019-10-19 01:47:02 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
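
Note: at the 20s mark every condition on all three nodes is still Unknown. With taint-based evictions enabled, a node whose Ready condition is Unknown is expected to receive a NoExecute taint (the well-known unreachable key; Ready=False maps to the not-ready key instead). A hedged sketch of that taint as a core/v1 object, construction only; the controller itself applies it through the API server:

    package nodetaint

    import v1 "k8s.io/api/core/v1"

    // unreachableTaint is the NoExecute taint that taint-based eviction is
    // expected to place on a node whose Ready condition is Unknown.
    func unreachableTaint() v1.Taint {
        return v1.Taint{
            Key:    "node.kubernetes.io/unreachable", // well-known key for Ready=Unknown
            Effect: v1.TaintEffectNoExecute,
        }
    }
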
I1019 01:47:17.537294  103889 httplog.go:90] GET /api/v1/nodes/node-0: (2.126146ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:17.566958  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:17.566998  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:17.567008  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:17.567023  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:17.567028  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:17.567379  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:17.567920  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:17.636913  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.771849ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:17.737841  103889 httplog.go:90] GET /api/v1/nodes/node-0: (2.598205ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:17.837195  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.87918ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:17.937005  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.836996ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:18.036718  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:18.036784  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:18.036882  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:18.037111  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.983667ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:18.037991  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:18.039078  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:18.040936  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:18.103357  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:18.103357  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:18.103634  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:18.104962  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:18.111259  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:18.111402  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:18.137453  103889 httplog.go:90] GET /api/v1/nodes/node-0: (2.253962ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:18.236630  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.568893ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:18.240854  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:18.308176  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:18.338898  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.687621ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
E1019 01:47:18.416149  103889 factory.go:687] Error getting pod allocatablef898bbc9-ca70-4f5f-b404-e9a8ab732aaa/pod-test-allocatable2 for retry: Get http://127.0.0.1:39267/api/v1/namespaces/allocatablef898bbc9-ca70-4f5f-b404-e9a8ab732aaa/pods/pod-test-allocatable2: dial tcp 127.0.0.1:39267: connect: connection refused; retrying...
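
Note: the E-level line above shows a retry of a pod fetch against 127.0.0.1:39267 failing with connection refused; that address most likely belongs to an API server fixture from an earlier test in the same binary that has since been shut down, so the retry cannot succeed. An illustrative sketch (not the scheduler's actual retry logic) of detecting connection-refused so a retry loop can give up instead of looping on an endpoint that no longer exists:

    package probe

    import (
        "errors"
        "net/http"
        "syscall"
    )

    // fetchOnce performs a single GET and classifies the failure: connection
    // refused means nothing is listening and retrying will not help.
    func fetchOnce(url string) (retryable bool, err error) {
        resp, err := http.Get(url)
        if err != nil {
            if errors.Is(err, syscall.ECONNREFUSED) {
                return false, err // endpoint is gone; stop retrying
            }
            return true, err // transient-looking failure; caller may retry
        }
        resp.Body.Close()
        return false, nil
    }
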
I1019 01:47:18.437038  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.845694ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:18.537309  103889 httplog.go:90] GET /api/v1/nodes/node-0: (2.13531ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:18.567163  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:18.567188  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:18.567190  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:18.567169  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:18.567163  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:18.567522  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:18.568056  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:18.637196  103889 httplog.go:90] GET /api/v1/nodes/node-0: (2.079955ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:18.737772  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.847157ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:18.837341  103889 httplog.go:90] GET /api/v1/nodes/node-0: (2.160177ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:18.937000  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.759915ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:19.036761  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.662201ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:19.036918  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:19.036948  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:19.037038  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:19.038124  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:19.039249  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:19.041081  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:19.103521  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:19.103560  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:19.103739  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:19.105130  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:19.111443  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:19.111547  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:19.136921  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.782114ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:19.238357  103889 httplog.go:90] GET /api/v1/nodes/node-0: (3.249779ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:19.241098  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:19.308384  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:19.337623  103889 httplog.go:90] GET /api/v1/nodes/node-0: (2.404651ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:19.437407  103889 httplog.go:90] GET /api/v1/nodes/node-0: (2.096079ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:19.537179  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.981979ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:19.567356  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:19.567397  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:19.567367  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:19.567368  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:19.567374  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:19.567742  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:19.568248  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:19.637224  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.992671ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:19.737204  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.933656ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:19.837334  103889 httplog.go:90] GET /api/v1/nodes/node-0: (2.155825ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:19.937093  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.874754ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:20.037076  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:20.037102  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:20.037177  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:20.037730  103889 httplog.go:90] GET /api/v1/nodes/node-0: (2.299872ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:20.038300  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:20.039550  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:20.042633  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:20.103744  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:20.103906  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:20.103764  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:20.105323  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:20.111731  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:20.111982  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:20.138160  103889 httplog.go:90] GET /api/v1/nodes/node-0: (2.940996ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:20.236946  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.764937ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:20.241441  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:20.308609  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:20.338017  103889 httplog.go:90] GET /api/v1/nodes/node-0: (2.757307ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:20.436916  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.868153ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:20.465910  103889 httplog.go:90] GET /api/v1/namespaces/default: (1.80854ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34350]
I1019 01:47:20.467947  103889 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.531137ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34350]
I1019 01:47:20.471075  103889 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (2.704535ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34350]
I1019 01:47:20.536837  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.720802ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:20.567562  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:20.567603  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:20.567588  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:20.567631  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:20.567645  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:20.567962  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:20.568406  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:20.637189  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.879062ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:20.738007  103889 httplog.go:90] GET /api/v1/nodes/node-0: (2.022467ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:20.836752  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.532527ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:20.937364  103889 httplog.go:90] GET /api/v1/nodes/node-0: (2.228232ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:21.037291  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:21.037291  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:21.037543  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:21.038281  103889 httplog.go:90] GET /api/v1/nodes/node-0: (2.112139ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:21.038447  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:21.039747  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:21.042809  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:21.104037  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:21.104273  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:21.104297  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:21.105640  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:21.112100  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:21.112156  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:21.137006  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.811727ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:21.238270  103889 httplog.go:90] GET /api/v1/nodes/node-0: (2.962079ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:21.241642  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:21.308767  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:21.336921  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.645244ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:21.437161  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.929948ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:21.537127  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.878448ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:21.567751  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:21.567800  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:21.567814  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:21.567829  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:21.567843  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:21.568368  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:21.568543  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:21.637376  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.921022ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:21.738987  103889 httplog.go:90] GET /api/v1/nodes/node-0: (3.756115ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:21.837116  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.851335ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:21.936962  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.77141ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:22.036888  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.729112ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:22.037476  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:22.037476  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:22.038504  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:22.038636  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:22.039921  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:22.042972  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:22.104241  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:22.104473  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:22.104479  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:22.106141  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:22.112426  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:22.112648  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:22.137256  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.910013ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:22.237384  103889 httplog.go:90] GET /api/v1/nodes/node-0: (2.099769ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:22.241814  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:22.308967  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:22.338635  103889 httplog.go:90] GET /api/v1/nodes/node-0: (3.397328ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:22.437102  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.927031ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:22.440309  103889 node_lifecycle_controller.go:1060] node node-0 hasn't been updated for 25.038676639s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-10-19 01:47:02 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I1019 01:47:22.440393  103889 node_lifecycle_controller.go:1060] node node-0 hasn't been updated for 25.038767152s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-10-19 01:46:57 +0000 UTC,LastTransitionTime:2019-10-19 01:47:02 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I1019 01:47:22.440416  103889 node_lifecycle_controller.go:1060] node node-0 hasn't been updated for 25.038795756s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-10-19 01:46:57 +0000 UTC,LastTransitionTime:2019-10-19 01:47:02 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I1019 01:47:22.440434  103889 node_lifecycle_controller.go:1060] node node-0 hasn't been updated for 25.038813174s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-10-19 01:46:57 +0000 UTC,LastTransitionTime:2019-10-19 01:47:02 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I1019 01:47:22.440526  103889 node_lifecycle_controller.go:1060] node node-1 hasn't been updated for 25.038706805s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-10-19 01:47:02 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I1019 01:47:22.441574  103889 node_lifecycle_controller.go:1060] node node-1 hasn't been updated for 25.039745289s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-10-19 01:46:57 +0000 UTC,LastTransitionTime:2019-10-19 01:47:02 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I1019 01:47:22.441614  103889 node_lifecycle_controller.go:1060] node node-1 hasn't been updated for 25.039795994s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-10-19 01:46:57 +0000 UTC,LastTransitionTime:2019-10-19 01:47:02 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I1019 01:47:22.441644  103889 node_lifecycle_controller.go:1060] node node-1 hasn't been updated for 25.039826318s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-10-19 01:46:57 +0000 UTC,LastTransitionTime:2019-10-19 01:47:02 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I1019 01:47:22.441798  103889 node_lifecycle_controller.go:1060] node node-2 hasn't been updated for 25.03992955s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-10-19 01:47:02 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I1019 01:47:22.441819  103889 node_lifecycle_controller.go:1060] node node-2 hasn't been updated for 25.039953091s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-10-19 01:46:57 +0000 UTC,LastTransitionTime:2019-10-19 01:47:02 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I1019 01:47:22.441835  103889 node_lifecycle_controller.go:1060] node node-2 hasn't been updated for 25.039968072s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-10-19 01:46:57 +0000 UTC,LastTransitionTime:2019-10-19 01:47:02 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I1019 01:47:22.441848  103889 node_lifecycle_controller.go:1060] node node-2 hasn't been updated for 25.039981287s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-10-19 01:46:57 +0000 UTC,LastTransitionTime:2019-10-19 01:47:02 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
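The node_lifecycle_controller lines above record the controller noticing that no kubelet heartbeat has arrived for each node within its grace period, so every condition it prints is left at Status=Unknown. The following is a minimal Go sketch of that staleness check only, not the controller's real code; the nodeCondition type is a simplified stand-in for v1.NodeCondition and the 40s grace period is an assumed value for illustration.

// staleness_sketch.go -- illustrative only; not the node lifecycle controller's implementation.
package main

import (
	"fmt"
	"time"
)

// nodeCondition mirrors the fields the log lines print (a simplified
// stand-in for v1.NodeCondition).
type nodeCondition struct {
	Type              string
	Status            string // "True", "False", or "Unknown"
	LastHeartbeatTime time.Time
}

// markStaleConditions flips a condition to Unknown when its last heartbeat is
// older than gracePeriod, mirroring the "hasn't been updated for ..." messages.
func markStaleConditions(conds []nodeCondition, gracePeriod time.Duration, now time.Time) []nodeCondition {
	for i := range conds {
		if now.Sub(conds[i].LastHeartbeatTime) > gracePeriod {
			// Kubelet stopped (or never started) posting status in time.
			conds[i].Status = "Unknown"
		}
	}
	return conds
}

func main() {
	now := time.Now()
	conds := []nodeCondition{
		{Type: "Ready", Status: "True", LastHeartbeatTime: now.Add(-25 * time.Second)},
		{Type: "MemoryPressure", Status: "False", LastHeartbeatTime: now.Add(-50 * time.Second)},
	}
	// 40s here is an assumed grace period; the test run above uses its own timings.
	conds = markStaleConditions(conds, 40*time.Second, now)
	fmt.Printf("%+v\n", conds)
}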
I1019 01:47:22.536727  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.572872ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:22.567954  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:22.568018  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:22.568036  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:22.568048  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:22.568062  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:22.568616  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:22.568836  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:22.638258  103889 httplog.go:90] GET /api/v1/nodes/node-0: (3.110188ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:22.740207  103889 httplog.go:90] GET /api/v1/nodes/node-0: (4.193353ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:22.836805  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.714528ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:22.937326  103889 httplog.go:90] GET /api/v1/nodes/node-0: (2.142723ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:23.037030  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.792224ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:23.037633  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:23.037647  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:23.038732  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:23.038756  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:23.040070  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:23.043185  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:23.104448  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:23.104658  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:23.104787  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:23.106379  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:23.112736  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:23.112940  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:23.138404  103889 httplog.go:90] GET /api/v1/nodes/node-0: (3.125832ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:23.237522  103889 httplog.go:90] GET /api/v1/nodes/node-0: (2.296127ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:23.242059  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:23.309208  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:23.337230  103889 httplog.go:90] GET /api/v1/nodes/node-0: (2.077494ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:23.437383  103889 httplog.go:90] GET /api/v1/nodes/node-0: (2.20556ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:23.538644  103889 httplog.go:90] GET /api/v1/nodes/node-0: (2.809768ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:23.568114  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:23.568159  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:23.568177  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:23.568193  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:23.568202  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:23.568777  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:23.569270  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:23.637471  103889 httplog.go:90] GET /api/v1/nodes/node-0: (2.193479ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:23.737036  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.866639ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:23.837919  103889 httplog.go:90] GET /api/v1/nodes/node-0: (2.762038ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:23.937378  103889 httplog.go:90] GET /api/v1/nodes/node-0: (2.12921ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:24.037416  103889 httplog.go:90] GET /api/v1/nodes/node-0: (2.189439ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:24.037750  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:24.037831  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:24.038927  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:24.038938  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:24.040191  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:24.043370  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:24.104717  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:24.104797  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:24.104959  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:24.106595  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:24.112928  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:24.113229  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:24.137217  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.931966ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:24.237346  103889 httplog.go:90] GET /api/v1/nodes/node-0: (2.091212ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:24.242229  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:24.309721  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:24.337183  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.986185ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:24.436808  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.667955ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:24.536970  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.793002ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:24.568322  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:24.568357  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:24.568343  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:24.568375  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:24.568420  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:24.568921  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:24.569493  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:24.637387  103889 httplog.go:90] GET /api/v1/nodes/node-0: (2.233327ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:24.736971  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.754466ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:24.837098  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.823796ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:24.937392  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.881406ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:25.037911  103889 httplog.go:90] GET /api/v1/nodes/node-0: (2.794439ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:25.037975  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:25.038124  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:25.039672  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:25.039816  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:25.040393  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:25.043657  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:25.104916  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:25.105094  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:25.105122  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:25.106833  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:25.113567  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:25.113593  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:25.137230  103889 httplog.go:90] GET /api/v1/nodes/node-0: (2.033943ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:25.237189  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.969136ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:25.242465  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:25.310028  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:25.337048  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.916405ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:25.437144  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.987486ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:25.536977  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.719404ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:25.568488  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:25.568522  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:25.568499  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:25.568493  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:25.568555  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:25.569088  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:25.569732  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:25.638339  103889 httplog.go:90] GET /api/v1/nodes/node-0: (3.014268ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:25.737205  103889 httplog.go:90] GET /api/v1/nodes/node-0: (2.038916ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:25.837316  103889 httplog.go:90] GET /api/v1/nodes/node-0: (2.141056ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:25.937118  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.764126ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:26.036775  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.620135ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:26.038175  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:26.038292  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:26.039849  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:26.039977  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:26.040559  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:26.043807  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:26.105176  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:26.105235  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:26.105254  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:26.107021  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:26.113793  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:26.113812  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:26.137831  103889 httplog.go:90] GET /api/v1/nodes/node-0: (2.316509ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:26.237003  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.912958ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:26.242615  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:26.310210  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:26.337840  103889 httplog.go:90] GET /api/v1/nodes/node-0: (2.573776ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:26.437677  103889 httplog.go:90] GET /api/v1/nodes/node-0: (2.575612ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:26.537077  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.86553ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:26.568686  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:26.568750  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:26.568725  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:26.568724  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:26.568747  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:26.569278  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:26.569883  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:26.636983  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.8605ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:26.737207  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.986886ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:26.838015  103889 httplog.go:90] GET /api/v1/nodes/node-0: (2.705449ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:26.843586  103889 httplog.go:90] GET /api/v1/namespaces/default: (1.779394ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36406]
I1019 01:47:26.845562  103889 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.456555ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36406]
I1019 01:47:26.847201  103889 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.140694ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36406]
I1019 01:47:26.908373  103889 httplog.go:90] GET /api/v1/namespaces/default: (1.596607ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:26.910130  103889 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.189469ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:26.911828  103889 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.140663ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:26.937103  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.840722ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:27.037068  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.988971ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:27.038326  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:27.038415  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:27.040075  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:27.040103  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:27.040707  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:27.044028  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:27.105481  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:27.105489  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:27.105492  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:27.107206  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:27.113965  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:27.114002  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:27.137061  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.8847ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:27.174640  103889 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.562013ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34350]
I1019 01:47:27.176845  103889 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.538782ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34350]
I1019 01:47:27.178536  103889 httplog.go:90] GET /api/v1/namespaces/kube-node-lease: (1.283127ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34350]
I1019 01:47:27.237096  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.899878ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:27.242778  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:27.310463  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:27.337888  103889 httplog.go:90] GET /api/v1/nodes/node-0: (2.487712ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:27.437099  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.846098ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:27.439476  103889 httplog.go:90] GET /api/v1/nodes/node-0: (1.594804ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
Oct 19 01:47:27.440: INFO: Waiting up to 15s for pod "testpod-1" in namespace "taint-based-evictions2d5bd2eb-3e9b-4f4d-9205-fc148b97f1b9" to be "updated with tolerationSeconds=300"
I1019 01:47:27.441540  103889 httplog.go:90] GET /api/v1/namespaces/taint-based-evictions2d5bd2eb-3e9b-4f4d-9205-fc148b97f1b9/pods/testpod-1: (1.295179ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
Oct 19 01:47:27.442: INFO: Pod "testpod-1": Phase="Pending", Reason="", readiness=false. Elapsed: 1.975672ms
Oct 19 01:47:27.442: INFO: Pod "testpod-1" satisfied condition "updated with tolerationSeconds=300"
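The condition the test just checked, "updated with tolerationSeconds=300", refers to a NoExecute toleration for the not-ready taint with a 300-second limit, the kind of toleration the DefaultTolerationSeconds admission plugin typically adds to pods that do not declare one. A minimal sketch of what such a toleration looks like when built with the k8s.io/api/core/v1 types (the surrounding program wiring is illustrative, not the test's own helper code):

// toleration_sketch.go -- illustrative only; shows the shape of the toleration
// the test waits for, not taint_test.go itself.
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func main() {
	seconds := int64(300)

	// A NoExecute toleration for the not-ready taint, limited to 300 seconds,
	// matching the "tolerationSeconds=300" condition logged above.
	toleration := v1.Toleration{
		Key:               "node.kubernetes.io/not-ready",
		Operator:          v1.TolerationOpExists,
		Effect:            v1.TaintEffectNoExecute,
		TolerationSeconds: &seconds,
	}

	fmt.Printf("%+v\n", toleration)
}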
I1019 01:47:27.442125  103889 node_lifecycle_controller.go:1060] node node-0 hasn't been updated for 30.040497469s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-10-19 01:47:02 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I1019 01:47:27.442163  103889 node_lifecycle_controller.go:1060] node node-0 hasn't been updated for 30.040541528s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-10-19 01:46:57 +0000 UTC,LastTransitionTime:2019-10-19 01:47:02 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I1019 01:47:27.442191  103889 node_lifecycle_controller.go:1060] node node-0 hasn't been updated for 30.040570023s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-10-19 01:46:57 +0000 UTC,LastTransitionTime:2019-10-19 01:47:02 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I1019 01:47:27.442207  103889 node_lifecycle_controller.go:1060] node node-0 hasn't been updated for 30.040586202s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-10-19 01:46:57 +0000 UTC,LastTransitionTime:2019-10-19 01:47:02 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I1019 01:47:27.442321  103889 node_lifecycle_controller.go:1060] node node-1 hasn't been updated for 30.040502491s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-10-19 01:47:02 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I1019 01:47:27.442356  103889 node_lifecycle_controller.go:1060] node node-1 hasn't been updated for 30.040538373s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-10-19 01:46:57 +0000 UTC,LastTransitionTime:2019-10-19 01:47:02 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I1019 01:47:27.442370  103889 node_lifecycle_controller.go:1060] node node-1 hasn't been updated for 30.040553532s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-10-19 01:46:57 +0000 UTC,LastTransitionTime:2019-10-19 01:47:02 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I1019 01:47:27.442382  103889 node_lifecycle_controller.go:1060] node node-1 hasn't been updated for 30.040564685s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-10-19 01:46:57 +0000 UTC,LastTransitionTime:2019-10-19 01:47:02 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I1019 01:47:27.442432  103889 node_lifecycle_controller.go:1060] node node-2 hasn't been updated for 30.04056572s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-10-19 01:47:02 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I1019 01:47:27.442449  103889 node_lifecycle_controller.go:1060] node node-2 hasn't been updated for 30.040582014s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-10-19 01:46:57 +0000 UTC,LastTransitionTime:2019-10-19 01:47:02 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I1019 01:47:27.442458  103889 node_lifecycle_controller.go:1060] node node-2 hasn't been updated for 30.040592795s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-10-19 01:46:57 +0000 UTC,LastTransitionTime:2019-10-19 01:47:02 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I1019 01:47:27.442485  103889 node_lifecycle_controller.go:1060] node node-2 hasn't been updated for 30.040619785s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-10-19 01:46:57 +0000 UTC,LastTransitionTime:2019-10-19 01:47:02 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I1019 01:47:27.448116  103889 taint_manager.go:383] Noticed pod deletion: types.NamespacedName{Namespace:"taint-based-evictions2d5bd2eb-3e9b-4f4d-9205-fc148b97f1b9", Name:"testpod-1"}
I1019 01:47:27.448351  103889 httplog.go:90] DELETE /api/v1/namespaces/taint-based-evictions2d5bd2eb-3e9b-4f4d-9205-fc148b97f1b9/pods/testpod-1: (5.598618ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:27.451206  103889 httplog.go:90] GET /api/v1/namespaces/taint-based-evictions2d5bd2eb-3e9b-4f4d-9205-fc148b97f1b9/pods/testpod-1: (1.279403ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:27.473722  103889 node_tree.go:100] Removed node "node-0" in group "region1:\x00:zone1" from NodeTree
I1019 01:47:27.473783  103889 taint_manager.go:422] Noticed node deletion: "node-0"
I1019 01:47:27.484136  103889 node_tree.go:100] Removed node "node-1" in group "region1:\x00:zone1" from NodeTree
I1019 01:47:27.484221  103889 taint_manager.go:422] Noticed node deletion: "node-1"
I1019 01:47:27.486876  103889 httplog.go:90] DELETE /api/v1/nodes: (35.180796ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55392]
I1019 01:47:27.487737  103889 node_tree.go:100] Removed node "node-2" in group "region1:\x00:zone1" from NodeTree
I1019 01:47:27.487956  103889 taint_manager.go:422] Noticed node deletion: "node-2"
I1019 01:47:27.568880  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:27.568902  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:27.568923  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:27.568958  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:27.568965  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:27.569466  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:27.570035  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:28.038537  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:28.038656  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:28.040242  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:28.043038  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:28.043047  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:28.044262  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:28.105678  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:28.105865  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:28.105888  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:28.107385  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:28.114140  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:28.114409  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:28.242955  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
I1019 01:47:28.310674  103889 reflector.go:268] k8s.io/client-go/informers/factory.go:134: forcing resync
    --- FAIL: TestTaintBasedEvictions/Taint_based_evictions_for_NodeNotReady_with_no_pod_tolerations (35.09s)
        taint_test.go:770: Failed to taint node in test 1 <node-0>, err: timed out waiting for the condition

				from junit_d965d8661547eb73cabe6d94d5550ec333e4c0fa_20191019-013631.xml
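The failure text "timed out waiting for the condition" is the generic error returned by the k8s.io/apimachinery wait package when a polled condition never becomes true; in this subtest the polled condition is the not-ready taint appearing on node-0. A minimal sketch of that polling pattern follows; hasNotReadyTaint and the intervals are hypothetical stand-ins, not the test's actual code.

// poll_sketch.go -- illustrative only; shows the wait.PollImmediate pattern
// behind the error above, not the integration test's real condition.
package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// hasNotReadyTaint is a hypothetical check; the real test inspects the node
// object returned by the API server for the node.kubernetes.io/not-ready taint.
func hasNotReadyTaint() (bool, error) {
	return false, nil // pretend the taint never shows up
}

func main() {
	// Poll every 100ms for up to 1s (the real test uses longer timeouts).
	err := wait.PollImmediate(100*time.Millisecond, time.Second, func() (bool, error) {
		return hasNotReadyTaint()
	})
	if err != nil {
		// The wait package reports the same generic error seen in the failure.
		fmt.Println("err:", err) // err: timed out waiting for the condition
	}
}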


